For robots to work effectively in homes and other non-industrial environments, the way they are instructed to perform their jobs, and especially how they can be told to stop, will be of critical importance. The people who interact with them may have little or no training in robotics, so any interface will need to be extremely intuitive. Science fiction authors typically assume that robots will eventually be capable of communicating with humans through [[speech]], [[gesture]]s, and [[facial expression]]s, rather than a [[command-line interface]]. Although speech would be the most natural way for the human to communicate, it is quite unnatural for the robot. It will be quite a while before robots interact as naturally as the fictional [[C-3PO]].
* '''Speech recognition:''' Interpreting the continuous flow of [[sound]]s coming from a human ([[speech recognition]]), in [[real-time computing|real time]], is a difficult task for a computer, mostly because of the great variability of [[Manner of articulation|speech]]. The same [[word]], spoken by the same person, may sound different depending on local [[acoustics]], [[volume]], the previous word, whether or not the speaker has a [[Common cold|cold]], and so on. It becomes even harder when the speaker has a different [[accent (sociolinguistics)|accent]].[http://cslu.cse.ogi.edu/HLTsurvey/ch1node4.html Survey of the State of the Art in Human Language Technology: 1.2: Speech Recognition] Nevertheless, great strides have been made in the field since Davis, Biddulph, and Balashek designed the first "voice input system" in 1952, which recognized "ten digits spoken by a single user with 100% accuracy".Fournier, Randolph Scott, and B. June Schmidt. "Voice Input Technology: Learning Style and Attitude Toward Its Use." Delta Pi Epsilon Journal 37 (1995): 1–12.{{cite web|url=http://www.dragon-medical-transcription.com/history_speech_recognition.html|publisher=Dragon Naturally Speaking|title=History of Speech & Voice Recognition and Transcription Software|accessdate=2007-10-27}}
* '''Gestures:''' One can imagine, in the future, explaining to a robot chef how to make a pastry, or asking directions from a robot police officer. In both cases, hand [[gesture]]s would aid the verbal descriptions. In the first case, the robot would recognize gestures made by the human, and perhaps repeat them for confirmation. In the second, the robot police officer would gesture to indicate "down the road, then turn right". It is quite likely that gestures will make up a part of the interaction between humans and robots.{{cite paper|url=http://robots.stanford.edu/papers/waldherr.gestures-journal.pdf|format=[[PDF]]|title=A Gesture Based Interface for Human-Robot Interaction|publisher=Kluwer Academic Publishers|author=Waldherr, Romero & Thrun|date=2000|accessdate=2007-10-28}} A great many systems have been developed to recognize human hand gestures.{{cite web|url=http://ls7-www.cs.uni-dortmund.de/research/gesture/vbgr-table.html|title=Vision Based Hand Gesture Recognition Systems|author=Markus Kohler|publisher=University of Dortmund|accessdate=2007-10-28}}{{Dead link|date=December 2008}}
* '''Facial expression:''' [[Facial expression]]s can provide rapid feedback on the progress of a dialog between two humans, and soon they may be able to do the same for humans and robots.
Robotic faces have been constructed by [[David Hanson (robotics designer)|Hanson Robotics]] using an elastic polymer called Frubber; the elasticity of the rubber facial coating, together with embedded subsurface motors ([[servomechanism|servos]]), allows a great range of facial expressions.[http://www.hansonrobotics.com/innovations.html Frubber facial expressions] The coating and servos are built on a metal [[skull]]. A robot should know how to approach a human, judging by their facial expression and body language: whether the person is happy, frightened, or agitated affects the type of interaction expected of the robot. Likewise, robots like [[Kismet (robot)|Kismet]] and the more recent Nexi[http://www.time.com/time/specials/packages/article/0,28804,1852747_1854195_1854135,00.html Nexi facial expressions] can produce a range of facial expressions, allowing them to have meaningful social exchanges with humans.{{cite web|url=http://www.samogden.com/Kismet.html|title=Kismet: Robot at MIT's AI Lab Interacts With Humans|publisher=Sam Ogden|accessdate=2007-10-28}}
* '''Artificial emotions:''' Artificial emotions can also be embedded, composed of a sequence of facial expressions and/or gestures. As can be seen from the film ''[[Final Fantasy: The Spirits Within]]'', programming these artificial emotions is complex and requires a great amount of human observation. To simplify this programming for the film, presets were created together with a special software program, which decreased the amount of time needed to make it. These presets could possibly be transferred for use in real-life robots. Speech recognition has also continued to improve: currently, the best systems can recognize continuous, natural speech at up to 160 words per minute, with an accuracy of 95%.
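One classical way to handle the timing variability of speech mentioned above — the same word spoken faster or slower between utterances — is dynamic time warping (DTW), which aligns two sequences of different lengths before comparing them. The sketch below uses invented one-dimensional "features" in place of real acoustic frames, and the word templates are toy data, not taken from any real recognizer:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences.

    Lets one sequence be locally stretched or compressed to best align
    with the other, absorbing speaking-rate variation."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch template
                                 d[i][j - 1],      # compress template
                                 d[i - 1][j - 1])  # one-to-one match
    return d[n][m]

# Toy "templates": 1-D stand-ins for per-frame acoustic features.
templates = {
    "one": [1, 3, 5, 5, 3, 1],
    "two": [2, 2, 6, 6, 2, 2],
}

def recognize(utterance):
    """Return the template word with the smallest DTW distance."""
    return min(templates, key=lambda w: dtw_distance(utterance, templates[w]))
```

Here `recognize([1, 3, 3, 5, 5, 5, 3, 1])` returns `"one"` even though the utterance is longer than the stored template, because the warping path absorbs the stretched frames at zero cost. Modern systems replaced template matching with statistical models, but the alignment problem it addresses is the same.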
* '''Personality:''' An apparent personality is something which may or may not be desirable in the commercial robots of the future.[http://www.cs.ubc.ca/~van/GI2005/Posters/GI_abstract.pdf (Park et al. 2005) Synthetic Personality in Robots and its Effect on Human-Robot Relationship] Nevertheless, researchers are trying to create robots which appear to have a personality:[http://www.npr.org/templates/story/story.php?storyId=5067678 National Public Radio: Robot Receptionist Dishes Directions and Attitude][http://viterbi.usc.edu/tools/download/?asset=/assets/023/49186.pdf&name=nsmaja.pdf New Scientist: A good robot has personality but not looks] that is, they use sounds, facial expressions, and body language to try to convey an internal state, which may be joy, sadness, or fear.
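The preset idea described above — an emotion replayed as a timed sequence of facial poses — can be sketched in a few lines. Everything here (the servo names, angles, timings, and the `play_emotion` helper) is invented for illustration; it is not the API of Kismet, Nexi, or any real robot, which use far more actuators:

```python
# A facial "pose" maps hypothetical servo names to angles in degrees.
SMILE   = {"brow_left": 10, "brow_right": 10, "mouth_corners": 30}
NEUTRAL = {"brow_left": 0, "brow_right": 0, "mouth_corners": 0}
FROWN   = {"brow_left": -15, "brow_right": -15, "mouth_corners": -20}

# An emotion preset is a list of (pose, hold_seconds) steps played in order.
EMOTION_PRESETS = {
    "joy":     [(SMILE, 1.5), (NEUTRAL, 0.5)],
    "sadness": [(FROWN, 2.0), (NEUTRAL, 0.5)],
}

def play_emotion(name, set_servo):
    """Replay a preset by issuing servo commands through a callback."""
    for pose, hold in EMOTION_PRESETS[name]:
        for servo, angle in pose.items():
            set_servo(servo, angle)
        # a real controller would wait `hold` seconds and interpolate
        # between poses here, rather than snapping instantly

# Record the commands instead of driving hardware:
commands = []
play_emotion("joy", lambda servo, angle: commands.append((servo, angle)))
```

Decoupling the preset data from the hardware callback is what would let expression sequences authored in one tool (as was done for the film) be replayed on a different physical face.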