AIRWiki - User contributions: CristianMandelli<br />
<br />
=Cameras, lenses and mirrors=
==IMPORTANT NOTES==<br />
'''Never touch the sensor element (CCD or CMOS) of a camera with anything!''' It can very easily be scratched.<br />
<br />
'''Never touch the glass elements of a lens with your hands!''' The oil from human skin will cause damage.<br />
<br />
==Cameras==<br />
In the AIRLab you can find different kinds of cameras. These are the main groups:<br />
*'''Analogue cameras'''. Video output is given as an electrical signal, which needs analogue-to-digital conversion to be processed by a computer; this is done by a dedicated card called a ''frame grabber'' or ''video capture card'' (the latter tend to be the lowest-performance items; see [[Cameras, lenses and mirrors#Frame grabbers]] for details). Analogue video is outdated for computer vision and robotics applications, due to its cost, low performance and complexity; nowadays digital camera systems (such as all the ones listed below) are always preferred.<br />
*'''USB cameras'''. Usually very cheap, they are suitable for low-performance applications (i.e. those where a low frame rate suffices and low image quality is acceptable). Their main advantage (along with cost) is that every modern computer has USB ports. The fact that the USB standard includes 5V DC power supply lines helps simplify camera design and use.<br />
*'''FireWire cameras'''. The FireWire (or IEEE1394) bus is generally used for low-end industrial cameras, i.e. devices far superior to typical USB cameras, but still low-performance by machine vision standards. Industrial cameras usually give the user much wider control over the acquisition parameters than consumer cameras, and are therefore usually preferred in robotics; their downside is the higher cost. There are different versions of the IEEE1394 link (see http://en.wikipedia.org/wiki/Firewire for details), with different bitrates, starting from the 400Mbit/s FireWire 400. They are generally all considered superior to USB 2.0, even though the theoretical bandwidth of FireWire 400 is lower (a rough bandwidth feasibility check is sketched after this list). FireWire ports can include power supply lines, but some interfaces (in particular those on portable computers) omit them. Although the use of FireWire interfaces has expanded in recent years, they are not yet considered a standard feature for motherboards.<br />
*'''GigE Vision cameras'''. GigE Vision (or Gigabit Ethernet Vision) is a rather new connection standard for machine vision, based upon the established Ethernet protocol in its Gigabit (i.e. 1000Mbps) version. It is very interesting, as complex multiple-camera systems can be easily built using existing (Gigabit) Ethernet hardware, such as cables and switches. Vision data is acquired simply through a generic Ethernet port, commonly found on motherboards or easily added. However, 100Mbps (or ''Fast Ethernet'') ports are not guaranteed to work and can sustain only modest video streams; on the other hand, 1000Mbps ports are now standard on motherboards, so this will not be a problem anymore in a few years. GigE Vision seems to be becoming the most common interface for low- to medium-performance industrial cameras.<br />
*'''CameraLink cameras'''. CameraLink is a high-speed interface expressly developed for high-performance machine vision applications. It is a point-to-point link, i.e. a CameraLink connection is used to connect a single camera to a digital acquisition card (''frame grabber''). Its adoption is limited to applications where extreme frame rates ''and'' resolutions are needed, because CameraLink gear is very expensive.<br />
*'''ST Camera boards'''. Camera boards based on a cell-phone sensor, with an ARM processor for onboard computation.<br />
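<br />
As a rule of thumb you can check whether an interface can sustain a given camera before buying cables and switches. The following minimal sketch (Python; the nominal bus rates are the standard figures, while the 80% derating and the uncompressed 8-bit pixel format are our assumptions, not measured values) computes the raw data rate of a stream and compares it with each interface:<br />
<source lang="python">
# Rough feasibility check: can a given interface sustain a given video stream?
# Nominal bus rates in Mbit/s; real usable throughput is lower (protocol
# overhead), so a conservative 80% derating factor is applied.

NOMINAL_MBPS = {
    "USB 2.0": 480,
    "FireWire 400": 400,
    "FireWire 800": 800,
    "Fast Ethernet": 100,
    "GigE": 1000,
}

def stream_mbps(width, height, fps, bits_per_pixel=8):
    """Raw (uncompressed) data rate of a video stream, in Mbit/s."""
    return width * height * bits_per_pixel * fps / 1e6

def feasible(interface, width, height, fps, bits_per_pixel=8, derating=0.8):
    return stream_mbps(width, height, fps, bits_per_pixel) \
           <= NOMINAL_MBPS[interface] * derating

# Example: a 752x480 camera at 70fps with 8-bit pixels needs ~202 Mbit/s:
# fine for GigE, far too much for a Fast Ethernet port.
print(stream_mbps(752, 480, 70))                # ~202
print(feasible("GigE", 752, 480, 70))           # True
print(feasible("Fast Ethernet", 752, 480, 70))  # False
</source>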
<br />
The following is a list of the cameras available in the AIRLab. (To be precise, it is a list of the cameras that are modern enough to be useful.) For each of them the main specifications (and a link to the full specifications) are given. Details on the different types of lens mount are given below in [[Cameras, lenses and mirrors#Lenses]]. The 'how many?' field tells whether multiple identical items are available. Finally, the 'where?' field tells you in which of the AIRLab sites (listed in [[The Labs]]) you can find an item, and the 'project' field is used to specify which project (if any) is using it.<br />
<br />
Ah, one last thing. People like to actually ''find'' things when they look for them, so '''don't forget to update the table when you move something away from its current location'''. If you don't know where you are taking it, just put your name in the table.<br />
<br />
<br />
==List of Cameras==<br />
{| border="1" cellpadding="5" cellspacing="0"<br />
!resolution<br />
!B/W, color<br />
!max. frame rate<br />
!sensor size<br />
!interface<br />
!maker<br />
!model<br />
!lens mount<br />
!how many?<br />
!where?<br />
!project<br />
!link to full specifications and/or manuals<br />
|-<br />
|1628x1236<br />
|B/W<br />
|24fps<br />
|1/1.8"<br />
|CameraLink<br />
|Hitachi<br />
|KP-F200CL<br />
|C-mount<br />
|1<br />
|DEI<br />
|<br />
|[[media:KP-F200-Op_Manual.pdf]]<br />
|-<br />
|752x480<br />
|color<br />
|70fps<br />
|1/3"<br />
|GigE<br />
|Prosilica<br />
|GC750C<br />
|C-mount<br />
|3<br />
|Lambrate (1/3), [[User:SimoneTognetti| Simone Tognetti]] (from 19/05/2009; since 14/12/2009 they are used for Affective experiments in the AIRLab at DEI) (2/3)<br />
|Driving companions (2/3)<br />
|http://www.prosilica.com/products/gc_series.html<br />
|-<br />
|659x493<br />
|color<br />
|90fps<br />
|1/3"<br />
|GigE<br />
|Prosilica<br />
|GC650C<br />
|C-mount<br />
|1<br />
|???<br />
|???<br />
|http://www.prosilica.com/products/gc_series.html<br />
|-<br />
|1024x768<br />
|color<br />
|30fps<br />
|1/3"<br />
|GigE<br />
|Prosilica<br />
|GC1020C<br />
|C-mount<br />
|2<br />
|Lambrate (2/2)<br />
|RAWSEEDS (1/2)<br />
|http://www.prosilica.com/products/gc_series.html<br />
|-<br />
|CCIR (625 lines)<br />
|B/W<br />
|CCIR (50fps, interlaced)<br />
|2/3"<br />
|analogue<br />
|Sony<br />
|XC-ST70CE<br />
|C-mount<br />
|2<br />
|DEI (2/2)<br />
|<br />
|[[media:XCST70E_manual.pdf]]<br />
|-<br />
|659x494<br />
|color<br />
|30fps<br />
|1/4"<br />
|FireWire 400<br />
|Unibrain<br />
|Fire-i 400 industrial<br />
|C-mount<br />
|3<br />
|Lambrate (3/3)<br />
|RAWSEEDS (3/3)<br />
|http://www.unibrain.com/Products/VisionImg/Fire_i_400_Industrial.htm<br />
|-<br />
|659x494<br />
|color<br />
|30fps<br />
|1/4"<br />
|FireWire 400<br />
|Unibrain<br />
|Fire-i board camera<br />
|proprietary<br />
|8<br />
|Lambrate (3/8), Bovisa (2/8), [[User:PaoloCalloni]] (1/8), [[User:DavideMigliore]] (1/8), [[User:CristianoAlessandro]] (1/8),<br />
<br />
1 taken at the end of February 2010 with a wide-angle lens (robocom's spare one), mounted "à la Rizzi" with plexiglass plates and a piece of item profile, [[User:Domenicogsorrenti]] (1/8)<br />
|RAWSEEDS (2/8), MRT (?/8)<br />
are these the "new" ones? If so, one is on rabbiati, the MRT goalkeeper, since Cuvio; it is in the omnidirectional head. Domenicogsorrenti 21.04.09<br />
<br />
1 new one is the front camera of recam<br />
<br />
1 new one is on the omnidirectional head of ridan<br />
<br />
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm<br />
|-<br />
|640x480<br />
|color<br />
|30fps<br />
|1/4"<br />
|FireWire 400<br />
|Unibrain<br />
|Fire-i digital camera<br />
|fixed optics (4.3mm, f2.0)<br />
|4<br />
|<br />
1 is on the omnidirectional head of rigo<br />
<br />
1 is on the omnidirectional head of recam<br />
<br />
1 is on the omnidirectional head mrt05-03 (Domenico's cabinet at unimib)<br />
<br />
1 is on the omnidirectional head mrt05-04 (Domenico's cabinet at unimib)<br />
|<br />
|http://www.unibrain.com/Products/VisionImg/Fire_i_DC.htm<br />
|-<br />
|640x480 dual sensor, 9cm baseline<br />
|color<br />
|30fps<br />
|1/3"<br />
|FireWire 400<br />
|Videre Design<br />
|STOC stereo-on-a-chip stereo camera<br />
|C-mount, fitted with two 3.5mm, f1.6, 1/2" lenses<br />
|1<br />
|Lambrate => li lin office => Domenicogsorrenti 13.01.09 => giulio fontana 23.01.09<br />
|<br />
|http://www.videredesign.com/vision/stoc.htm<br />
|-<br />
|640x480<br />
|color<br />
|60fps<br />
|1/3"<br />
|FireWire 400<br />
|Videre Design<br />
|DCSG (associated with STOC)<br />
|C-mount, fitted with one 3.5mm, f1.6, 1/2" lens<br />
|1<br />
|Lambrate<br />
|<br />
|http://www.videredesign.com/vision/dcsg.htm<br />
|-<br />
|?<br />
|color<br />
|30fps<br />
|1/3.8"<br />
|?<br />
|ST Microelectronics<br />
|ST1-Cam + ST2-Cam<br />
|integrated<br />
|2<br />
|ST1-Cam (STLCam (ST LEGO Camera)) (with Anil until 15.10.2010)[[User:AnilKoyuncu| Anil Koyuncu]], ST2-Cam [[User:LorenzoConsolaro | Lorenzo Consolaro]] and [[User:DarioCecchetto | Dario Cecchetto]] <br />
|ST1-Cam [[RunBot: a Robogame Robot]]<br />
| [[Media:Cameradatasheet.pdf]],[[Media:Rvs-v1-0.pdf]], [[Media:RVS_Datasheet_v2.1.pdf]] ,http://www.danielecaltabiano.com/wwme/ST-SW/st-sw.htm, [[Media:Cam_pin_map.pdf]]<br />
|-<br />
|?<br />
|color<br />
|?<br />
|?<br />
|?<br />
|ST Microelectronics<br />
|ST5-CamMic + ST6-CamMic<br />
|integrated with microphone<br />
|2<br />
|ST5-CamMic [[User:AndreaBonarini| Andrea Bonarini]], ST6-CamMic AIRLab for [[E-2?]] <br />
|ST6-CamMic [[E-2?]]<br />
|<br />
|-<br />
|?<br />
|color<br />
|?<br />
|?<br />
|?<br />
|ST Microelectronics<br />
|ST4-DC (Demo board)<br />
|integrated<br />
|1<br />
|[[User:RaffaelePetta|Raffaele Petta]]<br />
|<br />
|<br />
|-<br />
|640x480<br />
|color<br />
|30fps<br />
|?<br />
|USB 2<br />
|Microsoft<br />
|Kinect<br />
|fixed optics<br />
|1<br />
|[[User:CristianMandelli|Cristian Mandelli]], [[User:DeborahZamponi|Deborah Zamponi]] July/August 2011<br />
|[http://airlab.elet.polimi.it/index.php/E-2%3F_-_A_robot_for_exhibitions E2? A robot for exhibitions]<br />
|<br />
|}<br />
<br><br />
<br />
==Lenses==<br />
Be aware that sensor dimension (i.e. its diagonal, measured in fractions of an inch) is ''not'' the same for all cameras. Therefore one of the key specifications for a lens is the maximum sensor dimension supported. If you use a given lens with too big a sensor, the edges of the image will be black, as they lie outside the circle of the projected image. Also beware of the strange convention used for sensor diagonals, i.e. a fraction in the form A/B" where A and B are integer ''or non-integer'' numbers: the format number is not the physical diagonal. For instance, a 1/2" sensor is smaller than a 1/1.8" one.<br />
The variability of sensor dimensions has another side effect: the same lens has a different angle of view if you change the sensor size. Therefore the same lens can behave as a wide-angle lens with a large sensor and as a telephoto lens with a small sensor.<br />
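<br />
To make the convention concrete, here is a minimal sketch (Python) that maps the usual optical formats to their commonly quoted nominal diagonals and computes the diagonal angle of view of an ideal, distortion-free rectilinear lens; real lenses will deviate somewhat:<br />
<source lang="python">
import math

# Commonly quoted sensor diagonals (mm) for the inch-fraction optical formats.
# Note that the format number is not the physical diagonal: a 1/2" sensor
# (8.0mm) is indeed smaller than a 1/1.8" one (8.9mm).
DIAGONAL_MM = {'1/4"': 4.5, '1/3"': 6.0, '1/2"': 8.0, '1/1.8"': 8.9, '2/3"': 11.0}

def angle_of_view_deg(focal_mm, sensor_format):
    """Diagonal angle of view of an ideal rectilinear lens on a given sensor."""
    d = DIAGONAL_MM[sensor_format]
    return math.degrees(2 * math.atan(d / (2 * focal_mm)))

# The same 8mm lens is a moderate wide-angle on a 2/3" sensor (~69 degrees)
# but behaves almost like a telephoto on a 1/4" sensor (~31 degrees).
for fmt in ('2/3"', '1/4"'):
    print(fmt, round(angle_of_view_deg(8.0, fmt), 1))
</source>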
<br />
A useful guide to lenses (in Italian or English) can be found at http://www.rapitron.it/guidaob.htm.<br />
<br />
The following is a list of the actual lenses available in the AIRLab. For each of them the main specifications (and a link to the maker's or vendor's page for full specifications) are given. A '?' means an unknown parameter: if you know its value or find it out experimentally when using the lens (e.g. the maximum sensor size), please ''update the table'' before the information is lost again! Lenses having 'M12x0.5' in the 'mount type' column are only usable with Unibrain's Fire-i board cameras. A 'YES' in the 'Mpixel' column indicates a so-called ''megapixel lens'', i.e. a high-quality, low-distortion lens designed for high-resolution industrial cameras (typically having large sensors); please note that some of these are specifically designed for B/W (i.e. black and white) cameras. The 'how many?' field tells whether multiple identical items are available. Finally, the 'where?' field tells you in which of the AIRLab sites (listed in [[The Labs]]) you can find an item, and the 'project' field is used to specify which project (if any) is using it. <br />
<br />
Ah, one last thing. People like to actually ''find'' things when they look for them, so '''don't forget to update the table when you move something away from its current location'''. If you don't know where you are bringing it, just put your name in the table.<br />
<br />
===C-mount and CS-mount lenses===<br />
Industrial cameras usually have interchangeable lenses. This allows for the choice of the lens that is most suitable for the application at hand. There are two main standards for industrial camera lenses: '''C-mount''' and '''CS-mount'''. Both are screw-type mounts with the same thread. CS-mount is simply a modified C-mount where the distance between the mounting flange and the sensor element (CCD or CMOS) is 5mm shorter: therefore a C-mount lens can be used on a CS-mount camera if an ''adapter ring'' (i.e. a 5mm distancing cylinder with suitable threads) is placed between them. It is impossible, though, to use a CS-mount lens on a C-mount camera: the lens would have to sit closer to the sensor than the mount allows, so the combination can never focus properly; forcing mismatched gear can also scratch the lens or damage the sensor. Just because a lens screws onto a camera, it doesn't mean the combination can actually work!<br />
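<br />
The compatibility rule follows directly from the flange-to-sensor distances of the two mounts (17.526mm for C-mount, 12.526mm for CS-mount, the standard figures). A minimal sketch (Python) of the check:<br />
<source lang="python">
# Flange-to-sensor distances of the two mounts (same 1"-32 screw thread).
FLANGE_MM = {"C": 17.526, "CS": 12.526}

def adapter_mm(lens_mount, camera_mount):
    """Spacer thickness needed between lens and camera, or None when the
    combination cannot work (the lens would need to sit inside the camera)."""
    gap = FLANGE_MM[lens_mount] - FLANGE_MM[camera_mount]
    return gap if gap >= 0 else None

print(adapter_mm("C", "CS"))   # 5.0  -> C-mount lens on CS camera, with ring
print(adapter_mm("CS", "C"))   # None -> impossible combination
print(adapter_mm("C", "C"))    # 0.0  -> direct fit
</source>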
<br />
{| border="1" cellpadding="5" cellspacing="0"<br />
!focal length<br />
!max. aperture<br />
!max. sensor size<br />
!mount type<br />
!maker<br />
!model<br />
!Mpixel<br />
!how many?<br />
!where?<br />
!project<br />
!link to full specifications<br />
|-<br />
|3.5mm<br />
|f1.4<br />
|?<br />
|C-mount<br />
|?<br />
|?<br />
|?<br />
|1<br />
|Lambrate<br />
|LURCH<br />
|?<br />
|-<br />
|4.0mm<br />
|f2.0<br />
|1/2"<br />
|C-mount<br />
|Microtron<br />
|FV0420<br />
|YES (B/W only)<br />
|2<br />
|Lambrate<br />
|<br />
|http://www.rapitron.it/obmegpxman1.htm<br />
|-<br />
|4.5mm<br />
|f1.4<br />
|1/2"<br />
|C-mount<br />
|?<br />
|?<br />
|?<br />
|1<br />
|DEI<br />
|<br />
|?<br />
|-<br />
|4.8mm<br />
|f1.8<br />
|2/3"<br />
|C-mount<br />
|Computar<br />
|M0518<br />
|NO<br />
|1<br />
|DEI<br />
|<br />
|http://www.computar.com/cctvprod/computar/mono/048.html<br />
|-<br />
|6mm<br />
|f1.4<br />
|?<br />
|C-mount<br />
|?<br />
|?<br />
|?<br />
|1<br />
|Lambrate (?)<br />
|<br />
|?<br />
|-<br />
|6mm<br />
|f1.4<br />
|1/2"<br />
|C-mount<br />
|Goyo<br />
|GMHR26014MCN<br />
|YES<br />
|4<br />
|Lambrate<br />
|2 in the cabinet + 2 empty boxes<br />
|http://www.goyooptical.com/products/industrial/hrmegapixel.html<br />
|-<br />
|8mm<br />
|f1.4<br />
|?<br />
|C-mount<br />
|?<br />
|?<br />
|?<br />
|1<br />
|DEI<br />
|<br />
|?<br />
|-<br />
|8mm<br />
|f1.4<br />
|2/3"<br />
|C-mount<br />
|Goyo<br />
|GMHR38014MCN<br />
|YES<br />
|2<br />
|Lambrate<br />
|Only 1...<br />
|http://www.goyooptical.com/products/industrial/hrmegapixel.html<br />
|-<br />
|8.5mm<br />
|f1.3<br />
|2/3"<br />
|C-mount<br />
|Computar<br />
|?<br />
|?<br />
|2<br />
|DEI<br />
|<br />
|(old model)<br />
|-<br />
|12mm<br />
|f1.8<br />
|2/3"<br />
|C-mount<br />
|?<br />
|?<br />
|?<br />
|2<br />
|1 Lambrate + ? DEI<br />
|<br />
|<br />
|-<br />
|12mm<br />
|f1.4<br />
|2/3"<br />
|C-mount<br />
|Goyo<br />
|GMHR31214MCN<br />
|YES<br />
|2<br />
|Lambrate<br />
|<br />
|http://www.goyooptical.com/products/industrial/hrmegapixel.html<br />
|-<br />
|15mm<br />
|f2.0<br />
|2/3"<br />
|C-mount<br />
|Microtron<br />
|FV1520<br />
|YES<br />
|1<br />
|Lambrate<br />
|<br />
|http://www.rapitron.it/obmegpxman1.htm<br />
|-<br />
|6-15mm<br />
|f1.4<br />
|?<br />
|C-mount<br />
|?<br />
|?<br />
|?<br />
|1<br />
|Lambrate<br />
|<br />
|?<br />
|-<br />
|12.5-75mm<br />
|f1.8<br />
|?<br />
|C-mount<br />
|?<br />
|?<br />
|?<br />
|1<br />
|DEI<br />
|<br />
|?<br />
|}<br />
<br><br />
<br />
===M12 lenses===<br />
We also use M12 lenses. These lenses are very simple, with no iris, and very small. Their mounting system is an M12x0.5 metric screw thread. They are commonly used for webcams, and usually do not provide top optical quality.<br />
<br />
{| border="1" cellpadding="5" cellspacing="0"<br />
!focal length<br />
!max. aperture<br />
!max. sensor size<br />
!mount type<br />
!maker<br />
!model<br />
!Mpixel<br />
!how many?<br />
!where?<br />
!project<br />
!link to full specifications<br />
|-<br />
|2.1mm<br />
|f2.0<br />
|1/4"<br />
|M12x0.5<br />
|Unibrain<br />
|2042<br />
|NO<br />
|6<br />
|<br />
1 is at Bovisa in Marcello's hands<br />
<br />
1 is at Lambrate on a Giano reused as robowii<br />
<br />
1 is at Bovisa on the front of the recam Triskar<br />
<br />
1 is with Martino to build a front camera => 06.05.09 it is at Bovisa, mounted on Triskar #3<br />
<br />
1 is with Davide Migliore for monoslam acquisitions<br />
<br />
1 is on the omnidirectional head of rabbiati<br />
<br />
Domenicogsorrenti 04.05.09<br />
|MRT midsize, robowii, monoslam<br />
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm<br />
|-<br />
|4.3mm, no IR filter<br />
|f2.0<br />
|1/4"<br />
|M12x0.5<br />
|Unibrain<br />
|2046<br />
|NO<br />
|1<br />
|Lambrate (1/1)<br />
|<br />
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm<br />
|-<br />
|4.3mm<br />
|f2.0<br />
|1/4"<br />
|M12x0.5<br />
|Unibrain<br />
|2043<br />
|NO<br />
|3<br />
|Bovisa (1/3), Lambrate (2/3)<br />
|<br />
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm<br />
|-<br />
|8mm<br />
|f2.0<br />
|1/4"<br />
|M12x0.5<br />
|Unibrain<br />
|2044<br />
|NO<br />
|1<br />
|Lambrate (1/1)<br />
|<br />
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm<br />
|}<br />
<br><br />
<br />
==Frame grabbers==<br />
As previously said, a '''frame grabber''' is an electronic board that connects to one or more cameras and converts the signals from the cameras into a data stream that can be processed by a computer. They are usually designed as expansion boards to be fitted into the computer case. Frame grabbers are necessary for ''analogue cameras'' (as they include the analogue-to-digital converters) and for CameraLink digital cameras (in this case the frame grabber is essentially a high-speed dedicated digital interface). Other kinds of digital cameras don't need a frame grabber: this is one of the main advantages of digital cameras over analogue ones in machine vision applications, where the processing is almost always performed by computers.<br />
In the AIRLab the following frame grabbers are available:<br />
*a digital frame grabber from Euresys, model Expert 2, having two CameraLink inputs (http://www.euresys.com/Products/grablink/GrablinkSeries.asp). ''Notes: needs a PCI-X slot; one of the inputs is not working due to a fault.''<br />
*two multichannel analogue frame grabbers from Matrox, model Meteor II/Multi-Channel, having three analogue inputs that can be combined into a single three-channel RGB analogue input (http://www.matrox.com/imaging/support/old_products/home.cfm). ''Note: one item is permanently mounted on the MO.RO.1 robot: see [[The MO.RO. family]] for details.''<br />
*two single-channel analogue frame grabbers from Matrox, models Meteor and Meteor Pro (http://www.matrox.com/imaging/support/old_products/home.cfm).<br />
All the frame grabbers (except the one on the MO.RO.1) are currently in AIRLab/DEI. If you move one of them, please '''write it down here'''... and do it NOW!<br />
<br />
<br />
==Mirrors==<br />
Much work has been done and is being done at the AIRLab on the topic of '''omnidirectional (machine) vision''' (sometimes referred to as ''omnivision''). Omnidirectional vision systems use special hardware to overcome the limitations of conventional vision systems in terms of field of view. The approach we generally adopt is to use a conventional camera in association with a convex '''mirror''', i.e. to capture the image reflected by a suitably shaped mirror. The possibility of designing mirrors with specific geometric properties provides a very useful means to control the geometric behaviour of the whole camera+mirror system.<br />
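<br />
As a taste of how such images are processed, the following sketch (Python + OpenCV; the function name and all parameters are hypothetical and must be calibrated for the actual mirror) unwraps the annular image seen through a convex mirror into a cylindrical panorama. It is purely a polar-to-cartesian remapping and ignores the actual mirror profile, which a real system would use to obtain a metrically correct unwarping:<br />
<source lang="python">
import numpy as np
import cv2

def unwrap_omni(img, center, r_min, r_max, out_w=720, out_h=180):
    """Unwrap an omnidirectional (camera + convex mirror) image into a
    cylindrical panorama. Assumes the mirror axis projects to `center` and
    the useful mirror annulus spans pixel radii [r_min, r_max]."""
    cx, cy = center
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)  # columns: azimuth
    radii = np.linspace(r_max, r_min, out_h)                  # top row: outer rim
    rr, tt = np.meshgrid(radii, theta, indexing="ij")
    map_x = (cx + rr * np.cos(tt)).astype(np.float32)
    map_y = (cy + rr * np.sin(tt)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

# Hypothetical usage, with parameters found by inspecting the image:
# pano = unwrap_omni(cv2.imread("omni.png"), center=(320, 240),
#                    r_min=40, r_max=230)
</source>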
<br />
TODO for someone who knows better ;-) : mirror list<br />
<br />
==Cables==<br />
The complete list of cables for camera connection and/or power is under construction. You can participate by listing below which cables you are using...<br />
<br />
{| border="1" cellpadding="5" cellspacing="0"<br />
!Type<br />
!length<br />
!how many?<br />
!where?<br />
!project<br />
|-<br />
|FireWire 6-6 <br />
|?<br />
|2<br />
|Bicocca (refer to Domenico G. Sorrenti, 2009-11-11)<br />
|?<br />
|-<br />
|FireWire 6-6 <br />
|?<br />
|1<br />
|on LURCH wheelchair<br />
|LURCH<br />
|}<br />

----

=Talk:Human-Robot Gestual Interaction=
'''Articles:'''<br />
<br />
'''''Theory and Evaluation of HRI (Jean Scholtz)'''''<br />
<br />
<ul><br />
<li>Human roles in the interaction with the robot</li><br />
<li>Adaptation of Norman's HCI model to HRI</li><br />
<li>Focuses attention on the Bystander Role</li><br />
<li>"research on emotion and social interaction with robot"</li><br />
</ul><br />
<br />
<br />
'''''Coordination of verbal and non-verbal actions in human–robot interaction at museums and exhibitions (Yamazaki)'''''<br />
<ul><br />
<li>Study of the importance of combining verbal and non-verbal actions in robots intended for social use</li><br />
<li>The focus here is on head movement coordinated with what the robot says or explains</li><br />
</ul><br />
<br />
<br />
'''''The Role of Expressiveness and Attention in Human-Robot Interaction (A. Bruce)'''''<br />
<ul><br />
<li>Use of a robot (RWI B21) whose face is rendered on a screen through 3D animation</li><br />
<li>The study of expressions is based on the facial expression code developed by Dalsarte</li><br />
<li>The robot's expressions vary according to how the human reacts to the directions the robot gives; it was observed, however, that people are often puzzled and offended by the expressions of disappointment shown by the robot.</li><br />
</ul><br />
<br />
<br />
'''''Museum Guide Robot Based on Sociological Interaction Analysis (Y. Kuno) AND Coordination of verbal and non-verbal actions in HRI at museums and exhibitions (Yamazaki et al)'''''<br />
<ul><br />
<li>Focuses attention on the importance of the robot's head movement and gaze direction.</li><br />
<li>Analysis of the listener's head movements (and possibly gaze) to gauge their degree of interest.</li><br />
<li>The movement of the robot's head and eyes increases the listener's attention</li><br />
</ul><br />
<br />
<br />
'''''How May I Serve You? A Robot Companion Approaching a Seated Person in a Helping Context (K. Dautenhahn)'''''<br />
<ul><br />
<li>Presents the results of several tests on the various approach strategies a robot can follow to reach a person (frontal, from the right, from the left)</li><br />
</ul><br />
<br />
<br />
<br />
'''''Adaptive Human-Aware Robot Navigation in Close Proximity to Humans (2011, Mikael Svenstrup)'''''<br />
<ul><br />
<li>A new method for pose estimation of a human by using laser rangefinder measurements.</li><br />
<li>Learning human behaviour using motion patterns and Case-Based Reasoning (CBR)</li><br />
<li>A human-aware navigation algorithm based on a potential field.</li><br />
</ul><br />
<br />
----<br />
<br />
'''To download via the Poli network:'''<br />
<ul><br />
<li>Use of nonverbal speech cues in social interaction between human and robot: emotional and interactional markers</li><br />
<br />
<li>Precision timing in human-robot interaction: coordination of head movement and utterance. (Yamazaki, Akiko et al. - 2008)</li><br />
</ul>
----

=User:CristianMandelli=
{{Student<br />
|category=Student<br />
|firstname=Cristian<br />
|lastname=Mandelli<br />
|photo=MandelliCristian-photo.jpg<br />
|email=cristianmandelli@gmail.com<br />
|projectpage=RoboWII2.1<br />
|advisor=AndreaBonarini;<br />
|status=inactive<br />
}}

----

=RoboWII2.1=
{{Project<br />
|title=RoboWII2.1<br />
|image=SkypeWII3Small.JPG<br />
|short_descr=Development of a Robogame based on Spykee and WIIMote<br />
|coordinator=AndreaBonarini<br />
|students=DiegoMereghetti; AlessandroMarin;DeborahZamponi; CristianMandelli;<br />
|resarea=Robotics<br />
|restopic=Robogames<br />
|start=2010/03/18<br />
|end=2010/10/15<br />
|status=Closed<br />
|level=Ms<br />
|type=Course<br />
}}<br />
This is a [[RobogameDesign|RobogameDesign]] project.<br />
<br />
This work is done as part of the [[ROBOWII|ROBOWII]] effort.<br />
<br />
A first part of the project was developed by [[User:DiegoMereghetti|DiegoMereghetti]] and [[User:AlessandroMarin|Alessandro Marin]] in 2009/2010. In 2010 [[User:DeborahZamponi|Deborah Zamponi]] and [[User:CristianMandelli| Cristian Mandelli]] completed the game for the HCI Lab Course at MS level.<br />
<br />
[[user:GiovanniCondello|Giovanni Condello]], [[user:NicolaCrovetti|Nicola Crovetti]] and [[user:AndreaDiGiorgio|Andrea Di Giorgio]] have worked on a new set of [[GesturesForRoboWII|gestures]] to make the game more interesting and interactive.<br />
<br />
'''Project Documentation'''<br />
<br />
Game documentation [[Media:RoboWII 2.1.pdf|RoboWII 2.1.pdf]] - [[Media:RoboWii 2.1-ZamponiMandelli.pdf| RoboWii2.1 Mandelli_Zamponi]] (last edit: 15/10/2010)<br />
<br />
Presentation of the game [[Media:RoboWII 2.1 presentation.zip|RoboWII 2.1 presentation.zip]] (last edit: 07/02/2010)<br />
<br />
LaTeX sources of the documentation [[Media:RoboWII 2.1 latex src.zip|RoboWII 2.1 latex src.zip]] (last edit: 07/02/2010)<br />
<br />
Diagrams of FSMs of the game, for use with DIA (Diagram Editor) [[Media:RoboWII 2.1 diagrams.zip|RoboWII 2.1 diagrams.zip]] (last edit: 07/02/2010)<br />
<br />
Code of implemented features [[Media:RoboWII 2.1 code.zip|RoboWII 2.1 code.zip]] (last edit: 07/02/2010)<br />
<br />
Example of game moves [[Media:RoboWII 2.1 movements.zip|RoboWII 2.1 movements.zip]] (last edit: 07/02/2010)<br />
<br />
<br />
<br />
{{#ev:youtube|pCFMwl73MCE}}<br />
<br />
*[http://www.youtube.com/watch?v=pCFMwl73MCE External link]<br />
[[RoboWII2.1]]
----

=Interpretation of facial expressions and movements of the head=
{{Project<br />
|title=Interpretation of facial expressions and movements of the head<br />
|coordinator=AndreaBonarini;<br />
|tutor=MatteoMatteucci<br />
|collaborator=SimoneTognetti<br />
|students=CristianMandelli<br />
|resarea=Affective Computing<br />
|restopic=Emotion from Interaction<br />
|start=2007/11/09<br />
|end=2008/11/09<br />
|status=Closed<br />
|level=Bs<br />
|type=Thesis<br />
}}<br />
=== Project description ===<br />
<br />
The objective of this project was the interpretation of facial expressions and movements of the head and upper part of the body.<br />
We reached this goal by developing a system that captures on video the movements of the head, eyes and eyebrows.<br />
To this end, we used face detection and blob analysis algorithms:<br />
<br />
* '''Face detection:''' algorithm for detecting a face in each video frame.<br />
<br />
* '''Blob Analysis:''' algorithm for eye and eyebrow detection.<br />
<br />
The System works on a three-level analysis:<br />
<br />
1. '''1st level:''' At this level we work on frame analysis in order to extract only the face area of the image.<br />
<br />
2. '''2nd level:''' Only once face detection has succeeded are eye and eyebrow detection and extraction applied.<br />
<br />
3. '''3rd level:''' At this level, data elaboration and movement analysis take place.<br />
<br />
We used the [http://sourceforge.net/projects/opencvlibrary/ OpenCV] library to develop the first and the second level of analysis. The Open Computer Vision Library provides more than 500 algorithms, with documentation and sample code for real-time computer vision.<br />
<br />
A recognition process can be much more efficient if it is based on the detection of features that encode some information about the class to be detected. This is the case of Haar-like features, which encode the existence of oriented contrasts between regions in the image. A set of these features can be used to encode the contrasts exhibited by a human face and their spatial relationships. Haar-like features are so called because they are computed similarly to the coefficients in Haar wavelet transforms.<br />
<br />
The object detector of OpenCV was initially proposed by Paul Viola and improved by Rainer Lienhart. First, a classifier (namely a cascade of boosted classifiers working with Haar-like features) is trained with a few hundred sample views of a particular object (e.g., a face or a car), called positive examples, which are scaled to the same size (say, 20x20), and with negative examples - arbitrary images of the same size. <br />
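<br />
The thesis code used the OpenCV C API of the time; the following is a minimal present-day equivalent (Python, using the pre-trained cascades bundled with the opencv-python package) of the first two analysis levels, not the original project code:<br />
<source lang="python">
import cv2

# Pre-trained Haar cascades shipped with opencv-python.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

frame = cv2.imread("frame.png")                 # one video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# 1st level: extract the face area of the image.
for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.2,
                                                  minNeighbors=5):
    face_roi = gray[y:y + h, x:x + w]
    # 2nd level: look for the eyes only inside the detected face region.
    eyes = eye_cascade.detectMultiScale(face_roi)
    print(f"face at ({x},{y}), {len(eyes)} eye candidate(s)")
</source>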
<br />
At the third level we used a blob analysis algorithm developed by Professor M. Matteucci. This algorithm allows us (in our case) to detect dark regions in the face image extracted at level 2.<br />
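<br />
That blob analyzer is not publicly documented here; as an illustration only, a generic dark-region detector playing the same role can be sketched with plain OpenCV primitives (the threshold and minimum area are arbitrary illustrative values):<br />
<source lang="python">
import cv2

def dark_blobs(face_roi_gray, thresh=60, min_area=30):
    """Find dark regions (eye/eyebrow candidates) in a grayscale face ROI."""
    # Pixels darker than `thresh` become foreground.
    _, mask = cv2.threshold(face_roi_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:      # drop tiny noise blobs
            x, y, w, h = cv2.boundingRect(c)
            blobs.append((x + w // 2, y + h // 2, w, h))
    return blobs
</source>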
<br />
=== Thesis ===<br />
<br />
Analisi di immagini per l'identificazione del volto e dei suoi movimenti [[Media:CristianMandelli-Thesis.pdf]]<br />
<br />
Face Detector and Blob detector code [[Media:CristianMandelli-Code-FaceAnalysis.zip]]<br />
<br />
=== Laboratory work and risk analysis ===<br />
<br />
Laboratory work for this project will be mainly performed at AIRLab/DEI and at home. Risks are related to the use of a PC and a camera.
<hr />
<div>{{Project<br />
|title=Interpretation of facial expressions and movements of the head<br />
|coordinator=AndreaBonarini;<br />
|tutor=MatteoMatteucci<br />
|collaborator=SimoneTognetti<br />
|students=CristianMandelli<br />
|resarea=Affective Computing<br />
|restopic=Emotion from Interaction<br />
|start=2007/11/09<br />
|end=2008/11/09<br />
|status=Closed<br />
|level=Bs<br />
|type=Thesis<br />
}}<br />
=== Project description ===<br />
<br />
The objective of this project was the interpretation of facial expressions and movements of the head and upper part of the body.<br />
We reach this goal by developing a system that is able to video capture the movements of head, eyes and eyebrows. <br />
In order to reach the aforementioned goal, we used face detection and blob analysis algorithms; in order:<br />
<br />
* '''Face detection:''' algorithm for detecting a face in each video frame.<br />
<br />
* '''Blob Analysis:''' algorithm for eyes and eyebrown detection.<br />
<br />
The system performs a three-level analysis:<br />
<br />
1. '''1st level:''' frame analysis extracts only the face area of the image.<br />
<br />
2. '''2nd level:''' only once face detection has succeeded are eye and eyebrow detection and extraction applied.<br />
<br />
3. '''3rd level:''' data processing and movement analysis take place.<br />
<br />
We used the [http://sourceforge.net/projects/opencvlibrary/ OpenCV] library to develop the first and second levels of analysis. The Open Computer Vision Library provides more than 500 algorithms, along with documentation and sample code, for real-time computer vision.<br />
<br />
A recognition process can be much more efficient if it is based on the detection of features that encode some information about the class to be detected. This is the case for Haar-like features, which encode the existence of oriented contrasts between regions of the image. A set of these features can be used to encode the contrasts exhibited by a human face and their spatial relationships. Haar-like features are so called because they are computed similarly to the coefficients in Haar wavelet transforms.<br />
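<br />
As an aside on why these features are cheap to evaluate: each rectangular sum can be read from an integral image in constant time, so a two-rectangle feature costs only a handful of array lookups (a sketch with OpenCV's Python bindings; coordinates are arbitrary examples):<br />
<pre>
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# cv2.integral returns a (H+1) x (W+1) summed-area table: ii[y, x] holds the
# sum of all pixels above and to the left of (y, x).
ii = cv2.integral(img)

def box_sum(y, x, h, w):
    """Sum of the h x w rectangle with top-left corner (y, x), in O(1)."""
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

# A two-rectangle "edge" feature: a dark band above a bright band, the kind of
# oriented contrast an eyebrow produces. Coordinates are made up for the example.
feature = box_sum(10, 10, 8, 16) - box_sum(18, 10, 8, 16)
</pre>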
<br />
OpenCV's object detector was initially proposed by Paul Viola and later improved by Rainer Lienhart. First, a classifier (namely, a cascade of boosted classifiers working with Haar-like features) is trained with a few hundred sample views of a particular object (e.g., a face or a car), called positive examples, which are scaled to a common size (say, 20x20 pixels), and with negative examples, i.e. arbitrary images of the same size.<br />
<br />
At the third level we used a blob analysis algorithm developed by Professor M. Matteucci; in our case, it detects dark regions in the face image extracted at level 2.<br />
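<br />
The level-3 code is in the archive below; as a hedged sketch of the idea, head and eye movements between consecutive frames can be estimated from the displacement of the blob centroids (the greedy nearest-neighbour matching here is an assumption for illustration, not the project's actual method):<br />
<pre>
import numpy as np

def mean_displacement(prev_centroids, curr_centroids):
    """Mean (dx, dy) between blobs matched greedily across two frames."""
    if len(prev_centroids) == 0 or len(curr_centroids) == 0:
        return np.zeros(2)
    prev = np.asarray(prev_centroids, dtype=float)
    deltas = []
    for c in np.asarray(curr_centroids, dtype=float):
        j = np.argmin(np.linalg.norm(prev - c, axis=1))  # nearest previous blob
        deltas.append(c - prev[j])
    # A consistent upward shift of both eyebrow blobs, say, suggests raising them.
    return np.mean(deltas, axis=0)
</pre>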
<br />
=== Thesis ===<br />
<br />
Analisi di immagini per l'identificazione del volto e dei suoi movimenti (Image analysis for the identification of the face and its movements) [[Media:CristianMandelli-Thesis.pdf]]<br />
<br />
Face detector and blob detector code [[Media:CristianMandelli-Code-FaceAnalysis.zip]]<br />
<br />
=== Laboratory work and risk analysis ===<br />
<br />
Laboratory work for this project will be performed mainly at AIRLab/DEI and at home. Risks are limited to those related to the use of a PC and a camera.</div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=Interpretation_of_facial_expressions_and_movements_of_the_head&diff=7081Interpretation of facial expressions and movements of the head2009-06-10T09:24:09Z<p>CristianMandelli: /* Thesis Title */</p>
<hr />
<div>== '''Project profile''' ==<br />
<br />
=== Thesis Title ===<br />
<br />
Analisi di immagini per l'identificazione del volto e dei suoi movimenti (Image analysis for the identification of the face and its movements) [[Media:CristianMandelli-Thesis.pdf]]<br />
<br />
=== Project short description ===<br />
<br />
We want to develop a robust face analysis algorithm that can be integrated with an emotion recognition algorithm while driving.<br />
<br />
Such a system could recognize high stress levels and modify the car's behaviour accordingly. We reach this goal by developing a system able to capture on video the movements of the head, eyes and eyebrows.<br />
To do so, we used face detection and blob analysis algorithms, applied in order:<br />
<br />
* '''Face detection:''' an algorithm for detecting a face in each video frame.<br />
<br />
* '''Blob analysis:''' an algorithm for eye and eyebrow detection.<br />
<br />
The system performs a three-level analysis:<br />
<br />
1. '''1st level:''' frame analysis extracts only the face area of the image.<br />
<br />
2. '''2nd level:''' only once face detection has succeeded are eye and eyebrow detection and extraction applied.<br />
<br />
3. '''3rd level:''' data processing and movement analysis take place.<br />
<br />
We used the OpenCV library to develop the first and second levels of analysis. The Open Computer Vision Library provides more than 500 algorithms, along with documentation and sample code, for real-time computer vision.<br />
<br />
A recognition process can be much more efficient if it is based on the detection of features that encode some information about the class to be detected. This is the case for Haar-like features, which encode the existence of oriented contrasts between regions of the image. A set of these features can be used to encode the contrasts exhibited by a human face and their spatial relationships. Haar-like features are so called because they are computed similarly to the coefficients in Haar wavelet transforms.<br />
<br />
OpenCV's object detector was initially proposed by Paul Viola and later improved by Rainer Lienhart. First, a classifier (namely, a cascade of boosted classifiers working with Haar-like features) is trained with a few hundred sample views of a particular object (e.g., a face or a car), called positive examples, which are scaled to a common size (say, 20x20 pixels), and with negative examples, i.e. arbitrary images of the same size.<br />
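<br />
Putting the three levels together, the per-frame processing loop might look roughly like this (a sketch using the modern OpenCV Python bindings; the original project predates this API, so treat it as illustrative only):<br />
<pre>
import cv2

# Stock OpenCV frontal-face cascade; the cv2.data path is an assumption
# about the local installation.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                        # default camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)               # level 1
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        _, dark = cv2.threshold(roi, 60, 255, cv2.THRESH_BINARY_INV)  # level 2
        # level 3: track the dark blobs (eyes, eyebrows) across frames here
    cv2.imshow("frames", frame)
    if cv2.waitKey(1) == 27:                     # Esc quits
        break
cap.release()
</pre>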
<br />
At the third level we used a blob analysis algorithm developed by Professor M. Matteucci; in our case, it detects dark regions in the face image extracted at level 2.</div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=File:CristianMandelli-Thesis.pdf&diff=7080File:CristianMandelli-Thesis.pdf2009-06-10T09:23:30Z<p>CristianMandelli: </p>
<hr />
<div></div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=Interpretation_of_facial_expressions_and_movements_of_the_head&diff=7078Interpretation of facial expressions and movements of the head2009-06-10T09:19:08Z<p>CristianMandelli: New page: == '''Project profile''' == === Thesis Title === Analisi di immagini per l'identificazione del volto e dei suoi movimenti === Project short description === We want to develop a strong ...</p>
<hr />
<div>== '''Project profile''' ==<br />
<br />
=== Thesis Title ===<br />
<br />
Analisi di immagini per l'identificazione del volto e dei suoi movimenti (Image analysis for the identification of the face and its movements)<br />
<br />
=== Project short description ===<br />
<br />
We want to develop a robust face analysis algorithm that can be integrated with an emotion recognition algorithm while driving.<br />
<br />
Such a system could recognize high stress levels and modify the car's behaviour accordingly. We reach this goal by developing a system able to capture on video the movements of the head, eyes and eyebrows.<br />
To do so, we used face detection and blob analysis algorithms, applied in order:<br />
<br />
* '''Face detection:''' an algorithm for detecting a face in each video frame.<br />
<br />
* '''Blob analysis:''' an algorithm for eye and eyebrow detection.<br />
<br />
The system performs a three-level analysis:<br />
<br />
1. '''1st level:''' frame analysis extracts only the face area of the image.<br />
<br />
2. '''2nd level:''' only once face detection has succeeded are eye and eyebrow detection and extraction applied.<br />
<br />
3. '''3rd level:''' data processing and movement analysis take place.<br />
<br />
We used the OpenCV library to develop the first and second levels of analysis. The Open Computer Vision Library provides more than 500 algorithms, along with documentation and sample code, for real-time computer vision.<br />
<br />
A recognition process can be much more efficient if it is based on the detection of features that encode some information about the class to be detected. This is the case for Haar-like features, which encode the existence of oriented contrasts between regions of the image. A set of these features can be used to encode the contrasts exhibited by a human face and their spatial relationships. Haar-like features are so called because they are computed similarly to the coefficients in Haar wavelet transforms.<br />
<br />
OpenCV's object detector was initially proposed by Paul Viola and later improved by Rainer Lienhart. First, a classifier (namely, a cascade of boosted classifiers working with Haar-like features) is trained with a few hundred sample views of a particular object (e.g., a face or a car), called positive examples, which are scaled to a common size (say, 20x20 pixels), and with negative examples, i.e. arbitrary images of the same size.<br />
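<br />
OpenCV also ships pretrained cascades beyond faces, so level 2 can restrict the eye search to the face rectangle found at level 1; a minimal sketch (haarcascade_eye.xml is a stock OpenCV file, and the parameters and file names are illustrative assumptions):<br />
<pre>
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

# face_roi: the grayscale face rectangle produced by the level-1 face
# detector (loaded from a hypothetical file here).
face_roi = cv2.imread("face_roi.png", cv2.IMREAD_GRAYSCALE)

# Searching only inside the face region is faster and cuts false positives.
eyes = eye_cascade.detectMultiScale(face_roi, scaleFactor=1.1, minNeighbors=4)
for (x, y, w, h) in eyes:
    print("eye candidate at", (x, y), "size", (w, h))
</pre>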
<br />
At the third level we used a blob analysis algorithm developed by Professor M. Matteucci; in our case, it detects dark regions in the face image extracted at level 2.</div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=Driving_companions&diff=7077Driving companions2009-06-10T09:18:40Z<p>CristianMandelli: /* Project description */</p>
<hr />
<div>== '''Project profile''' ==<br />
<br />
=== Project name ===<br />
<br />
Driving companions<br />
<br />
=== Project short description ===<br />
<br />
The objective of this project is to develop a framework (hardware/software) to recognize emotion while driving. In the automotive field there is increasing interest in more trustworthy and comfortable cars. For this reason we want to develop a system able to recognize emotion in a car environment. The purpose could be to warn the driver in case of high stress, or to control the car to make the driver feel better.<br />
<br />
=== Dates ===<br />
Start date: 2007/11/09<br />
<br />
End date: 2008/11/09<br />
<br />
=== Internet site(s) ===<br />
<br />
=== People involved ===<br />
<br />
==== Project leaders ====<br />
<br />
* [[User:AndreaBonarini|Andrea Bonarini]]<br />
* [[User:MatteoMatteucci|Matteo Matteucci]]<br />
<br />
==== Other Politecnico di Milano people ====<br />
<br />
* [[User:SimoneTognetti|Simone Tognetti]]<br />
<br />
==== Students ====<br />
'''Students currently working on the project'''<br />
<br />
* [[User:PamelaGotti|Pamela Gotti]]<br />
* [[User:CristianMandelli|Cristian Mandelli]]<br />
<br />
=== Laboratory work and risk analysis ===<br />
<br />
Laboratory work for this project will be mainly performed at AIRLab/Lambrate. It will include electrical and electronic activity. Potentially risky activities are the following:<br />
* Use of soldering iron. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of high-voltage circuits. Special gloves and a current limiter will be used.<br />
* Robot testing. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of a modified (human-guided) golf cart. We will use the cart only in open-air environments.<br />
<br />
== '''Project description''' ==<br />
The project develops along two parallel lines.<br />
<br />
Pamela Gotti is working on the analysis of biological signals and is considering other "natural" sources of signals, such as sensors on the steering wheel and the seat.<br />
<br />
Cristian Mandelli is working on the interpretation of facial expressions and movements of the head and upper part of the body. [[Interpretation of facial expressions and movements of the head|Details]]</div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=Driving_companions&diff=7075Driving companions2009-06-10T08:46:10Z<p>CristianMandelli: /* Project description */</p>
<hr />
<div>== '''Project profile''' ==<br />
<br />
=== Project name ===<br />
<br />
Driving companions<br />
<br />
=== Project short description ===<br />
<br />
The objective of this project is to develop a framework (hardware/software) to recognize emotion while driving. In the automotive field there is increasing interest in more trustworthy and comfortable cars. For this reason we want to develop a system able to recognize emotion in a car environment. The purpose could be to warn the driver in case of high stress, or to control the car to make the driver feel better.<br />
<br />
=== Dates ===<br />
Start date: 2007/11/09<br />
<br />
End date: 2008/11/09<br />
<br />
=== Internet site(s) ===<br />
<br />
=== People involved ===<br />
<br />
==== Project leaders ====<br />
<br />
* [[User:AndreaBonarini|Andrea Bonarini]]<br />
* [[User:MatteoMatteucci|Matteo Matteucci]]<br />
<br />
==== Other Politecnico di Milano people ====<br />
<br />
* [[User:SimoneTognetti|Simone Tognetti]]<br />
<br />
==== Students ====<br />
'''Students currently working on the project'''<br />
<br />
* [[User:PamelaGotti|Pamela Gotti]]<br />
* [[User:CristianMandelli|Cristian Mandelli]]<br />
<br />
=== Laboratory work and risk analysis ===<br />
<br />
Laboratory work for this project will be mainly performed at AIRLab/Lambrate. It will include electrical and electronic activity. Potentially risky activities are the following:<br />
* Use of soldering iron. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of high-voltage circuits. Special gloves and a current limiter will be used.<br />
* Robot testing. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of a modified (human-guided) golf cart. We will use the cart only in open-air environments.<br />
<br />
== '''Project description''' ==<br />
The project develops along two parallel lines.<br />
<br />
Pamela Gotti is working on the analysis of biological signals and is considering other "natural" sources of signals, such as sensors on the steering wheel and the seat.<br />
<br />
Cristian Mandelli is working on the interpretation of facial expressions and movements of the head and upper part of the body. [[Details]]</div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=Driving_companions&diff=5252Driving companions2009-02-21T10:56:30Z<p>CristianMandelli: /* Project description */</p>
<hr />
<div>== '''Project profile''' ==<br />
<br />
=== Project name ===<br />
<br />
Driving companions<br />
<br />
=== Project short description ===<br />
<br />
The objective of this project is to develop a framework (hardware/software) to recognize emotion while driving. In the automotive field there is increasing interest in more trustworthy and comfortable cars. For this reason we want to develop a system able to recognize emotion in a car environment. The purpose could be to warn the driver in case of high stress, or to control the car to make the driver feel better.<br />
<br />
=== Dates ===<br />
Start date: 2007/11/09<br />
<br />
End date: 2008/11/09<br />
<br />
=== Internet site(s) ===<br />
<br />
=== People involved ===<br />
<br />
==== Project leaders ====<br />
<br />
* [[User:AndreaBonarini|Andrea Bonarini]]<br />
* [[User:MatteoMatteucci|Matteo Matteucci]]<br />
<br />
==== Other Politecnico di Milano people ====<br />
<br />
* [[User:SimoneTognetti|Simone Tognetti]]<br />
<br />
==== Students ====<br />
'''Students currently working on the project'''<br />
<br />
* [[User:PamelaGotti|Pamela Gotti]]<br />
* [[User:CristianMandelli|Cristian Mandelli]]<br />
<br />
=== Laboratory work and risk analysis ===<br />
<br />
Laboratory work for this project will be mainly performed at AIRLab/Lambrate. It will include electrical and electronic activity. Potentially risky activities are the following:<br />
* Use of soldering iron. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of high-voltage circuits. Special gloves and a current limiter will be used.<br />
* Robot testing. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of a modified (human-guided) golf cart. We will use the cart only in open-air environments.<br />
<br />
== '''Project description''' ==<br />
The project develops along two parallel lines.<br />
<br />
Pamela Gotti is working on the analysis of biological signals and is considering other "natural" sources of signals, such as sensors on the steering wheel and the seat.<br />
<br />
Cristian Mandelli is working on the interpretation of facial expressions and movements of the head and upper part of the body.</div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=Driving_companions&diff=5251Driving companions2009-02-21T10:56:13Z<p>CristianMandelli: </p>
<hr />
<div>== '''Project profile''' ==<br />
<br />
=== Project name ===<br />
<br />
Driving companions<br />
<br />
=== Project short description ===<br />
<br />
The objective of this project is to develop a framework (hardware/software) to recognize emotion while driving. In the automotive field there is increasing interest in more trustworthy and comfortable cars. For this reason we want to develop a system able to recognize emotion in a car environment. The purpose could be to warn the driver in case of high stress, or to control the car to make the driver feel better.<br />
<br />
=== Dates ===<br />
Start date: 2007/11/09<br />
<br />
End date: 2008/11/09<br />
<br />
=== Internet site(s) ===<br />
<br />
=== People involved ===<br />
<br />
==== Project leaders ====<br />
<br />
* [[User:AndreaBonarini|Andrea Bonarini]]<br />
* [[User:MatteoMatteucci|Matteo Matteucci]]<br />
<br />
==== Other Politecnico di Milano people ====<br />
<br />
* [[User:SimoneTognetti|Simone Tognetti]]<br />
<br />
==== Students ====<br />
'''Students currently working on the project'''<br />
<br />
* [[User:PamelaGotti|Pamela Gotti]]<br />
* [[User:CristianMandelli|Cristian Mandelli]]<br />
<br />
=== Laboratory work and risk analysis ===<br />
<br />
Laboratory work for this project will be mainly performed at AIRLab/Lambrate. It will include electrical and electronic activity. Potentially risky activities are the following:<br />
* Use of soldering iron. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of high-voltage circuits. Special gloves and a current limiter will be used.<br />
* Robot testing. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of a modified (human-guided) golf cart. We will use the cart only in open-air environments.<br />
<br />
== '''Project description''' ==<br />
The project develops along two parallel lines.<br />
<br />
Pamela Gotti is working on the analysis of biological signals and is considering other "natural" sources of signals, such as sensors on the steering wheel and the seat.<br />
<br />
Cristian Mandelli is working on the interpretation of facial expressions and movements of the head and upper part of the body. [[prova]]</div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=User:CristianMandelli&diff=3160User:CristianMandelli2008-05-26T12:51:18Z<p>CristianMandelli: </p>
<hr />
<div>{{User<br />
|firstname=Cristian<br />
|lastname=Mandelli<br />
|email=cristianmandelli(dot)gmail(dot)com<br />
|advisor=SimoneTognetti<br />
|projectpage=Driving companions<br />
|photo= CristianMandelli-photo.jpg<br />
}}</div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=User:CristianMandelli&diff=3159User:CristianMandelli2008-05-26T12:50:40Z<p>CristianMandelli: </p>
<hr />
<div>{{User<br />
|firstname=Cristian<br />
|lastname=Mandelli<br />
|email=cristianmandelli (dot) gmail (dot) com<br />
|advisor=SimoneTognetti<br />
|projectpage=Driving companions<br />
|photo= CristianMandelli-photo.jpg<br />
}}</div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=File:CristianMandelli-photo.jpg&diff=3158File:CristianMandelli-photo.jpg2008-05-26T12:50:06Z<p>CristianMandelli: </p>
<hr />
<div></div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=User:CristianMandelli&diff=3157User:CristianMandelli2008-05-26T12:48:14Z<p>CristianMandelli: </p>
<hr />
<div>{{User<br />
|firstname=Cristian<br />
|lastname=Mandelli<br />
|email=cristianmandelli (dot) gmail (dot) com<br />
|advisor=SimoneTognetti<br />
|projectpage=Driving companions<br />
|photo= :)<br />
}}</div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=Driving_companions&diff=3089Driving companions2008-05-19T11:37:43Z<p>CristianMandelli: </p>
<hr />
<div>== '''Project profile''' ==<br />
<br />
=== Project name ===<br />
<br />
Driving companions<br />
<br />
=== Project short description ===<br />
<br />
The objective of this project is to develop a framework (hardware/software) to recognize emotion while driving. In the automotive field there is increasing interest in more trustworthy and comfortable cars. For this reason we want to develop a system able to recognize emotion in a car environment. The purpose could be to warn the driver in case of high stress, or to control the car to make the driver feel better.<br />
<br />
=== Dates ===<br />
Start date: 2007/11/09<br />
<br />
End date: 2008/11/09<br />
<br />
=== Internet site(s) ===<br />
<br />
=== People involved ===<br />
<br />
==== Project leaders ====<br />
<br />
* [[User:AndreaBonarini|Andrea Bonarini]]<br />
* [[User:MatteoMatteucci|Matteo Matteucci]]<br />
<br />
==== Other Politecnico di Milano people ====<br />
<br />
* [[User:SimoneTognetti|Simone Tognetti]]<br />
<br />
==== Students ====<br />
'''Students currently working on the project'''<br />
<br />
* [[User:PamelaGotti|Pamela Gotti]]<br />
* [[User:CristianMandelli|Cristian Mandelli]]<br />
<br />
=== Laboratory work and risk analysis ===<br />
<br />
Laboratory work for this project will be mainly performed at AIRLab/Lambrate. It will include electrical and electronic activity. Potentially risky activities are the following:<br />
* Use of soldering iron. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of high-voltage circuits. Special gloves and a current limiter will be used.<br />
* Robot testing. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of a modified (human-guided) golf cart. We will use the cart only in open-air environments.<br />
<br />
== '''Project description''' ==<br />
The project develops along two parallel lines.<br />
<br />
Pamela Gotti is working on the analysis of biological signals and is considering other "natural" sources of signals, such as sensors on the steering wheel and the seat.<br />
<br />
Cristian Mandelli is working on the interpretation of facial expressions and movements of the head and upper part of the body.</div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=Driving_companions&diff=3088Driving companions2008-05-19T11:37:04Z<p>CristianMandelli: </p>
<hr />
<div>== '''Part 1: project profile''' ==<br />
<br />
=== Project name ===<br />
<br />
Driving companions<br />
<br />
=== Project short description ===<br />
<br />
The objective of this project is to develop a framework (hardware/software) to recognize emotion while driving. In the automotive field there is increasing interest in more trustworthy and comfortable cars. For this reason we want to develop a system able to recognize emotion in a car environment. The purpose could be to warn the driver in case of high stress, or to control the car to make the driver feel better.<br />
<br />
=== Dates ===<br />
Start date: 2007/11/09<br />
<br />
End date: 2008/11/09<br />
<br />
=== Internet site(s) ===<br />
<br />
=== People involved ===<br />
<br />
==== Project leaders ====<br />
<br />
* [[User:AndreaBonarini|Andrea Bonarini]]<br />
* [[User:MatteoMatteucci|Matteo Matteucci]]<br />
<br />
==== Other Politecnico di Milano people ====<br />
<br />
* [[User:SimoneTognetti|Simone Tognetti]]<br />
<br />
==== Students ====<br />
'''Students currently working on the project'''<br />
<br />
* [[User:PamelaGotti|Pamela Gotti]]<br />
* [[User:CristianMandelli|Cristian Mandelli]]<br />
<br />
=== Laboratory work and risk analysis ===<br />
<br />
Laboratory work for this project will be mainly performed at AIRLab/Lambrate. It will include electrical and electronic activity. Potentially risky activities are the following:<br />
* Use of soldering iron. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of high-voltage circuits. Special gloves and a current limiter will be used.<br />
* Robot testing. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of a modified (human-guided) golf cart. We will use the cart only in open-air environments.<br />
<br />
== '''Part 2: project description''' ==<br />
The project develops along two parallel lines.<br />
<br />
Pamela Gotti is working on the analysis of biological signals and is considering other "natural" sources of signals, such as sensors on the steering wheel and the seat.<br />
<br />
Cristian Mandelli is working on the interpretation of facial expressions and movements of the head and upper part of the body.</div>CristianMandellihttps://airwiki.elet.polimi.it/index.php?title=Driving_companions&diff=3087Driving companions2008-05-19T11:36:34Z<p>CristianMandelli: </p>
<hr />
<div>== '''Part 1: project profile''' ==<br />
<br />
=== Project name ===<br />
<br />
Driving companions<br />
<br />
=== Project short description ===<br />
<br />
The objective of this project is to develop a framework (hardware/software) to recognize emotion while driving. In the automotive field there is increasing interest in more trustworthy and comfortable cars. For this reason we want to develop a system able to recognize emotion in a car environment. The purpose could be to warn the driver in case of high stress, or to control the car to make the driver feel better.<br />
<br />
=== Dates ===<br />
Start date: 2007/11/09<br />
<br />
End date: 2008/11/09<br />
<br />
=== Internet site(s) ===<br />
<br />
=== People involved ===<br />
<br />
==== Project leaders ====<br />
<br />
* [[User:AndreaBonarini|Andrea Bonarini]]<br />
* [[User:MatteoMatteucci|Matteo Matteucci]]<br />
<br />
==== Other Politecnico di Milano people ====<br />
<br />
* [[User:SimoneTognetti|Simone Tognetti]]<br />
<br />
==== Students ====<br />
'''Students currently working on the project'''<br />
<br />
* [[User:PamelaGotti|Pamela Gotti]]<br />
* [[User:CristianMandelli|Cristian Mandelli]]<br />
<br />
=== Laboratory work and risk analysis ===<br />
<br />
Laboratory work for this project will be mainly performed at AIRLab/Lambrate. It will include electrical and electronic activity. Potentially risky activities are the following:<br />
* Use of soldering iron. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of high-voltage circuits. Special gloves and a current limiter will be used.<br />
* Robot testing. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.<br />
* Use of a modified (human-guided) golf cart. We will use the cart only in open-air environments.<br />
<br />
== '''Part 2: project description''' ==<br />
The project develops along two parallel lines.<br />
<br />
Pamela Gotti is working on the analysis of biological signals and is considering other "natural" sources of signals, such as sensors on the steering wheel and the seat.<br />
<br />
Cristian Mandelli is working on the interpretation of facial expressions and movements of the head and upper part of the body.</div>CristianMandelli