
Showing posts with label Electronics / Robotics. Show all posts

Sunday, April 10, 2016

Robotic farming: will it be the technology of the future, given labor shortages?


Will robotic farming be the technology of the future, given labor shortages? Experiments are underway for another revolution in the agriculture sector.


Robotic farms may be the future of crop production, and Japan is on its way to launching the first of its kind. Spread, a vegetable producer based in Kyoto, promises that its pesticide-free lettuce will pack more nutrients, cost less to produce than lettuce grown with conventional farming techniques, and be produced at a far faster rate.

"Seed planting will still be done by people, but the rest of the process, including harvesting, will be done by  industrial robots," company official Koji Morisada told AFP. Morisada added that the robot labor would cut personnel costs by roughly half and reduce energy expenses down by nearly one third thanks to the LED lighting they plan on implementing.
In 2012, the Japanese firm announced it would be the first company in the world to launch a fully automated farm, with robots in charge of nearly every step of the process. Now that promise has finally come to fruition: the company has begun growing lettuce tended by robots that resemble human arms. The "indoor grow house" will begin operation by mid-2017, with a plan to produce 30,000 heads of lettuce a day and a goal of scaling up to half a million heads a day within five years of opening.
This futuristic lettuce plant is an advanced type of hydroponic indoor vegetable growing operation, which allows the farming process to move indoors where the sun never shines. Sunless farming relies on darkened rooms illuminated by blue and red LED lights.

These smart farms are climate-controlled growing units that allow producers to farm profitably indoors, a system born out of tragedy. The Shigeharu Shimamura farming company opened in 2004, and after a nuclear disaster led to food shortages, an abandoned factory was transformed into the world's biggest indoor farm, spanning 25,000 square feet and currently producing up to 10,000 heads of lettuce a day, 100 times more per square foot than conventional farming methods. The plants grow twice as fast while using 40 percent less power, producing 80 percent less food waste, and consuming 99 percent less water than outdoor fields.
The robot-run farm is predicted to outdo Shimamura's indoor farms, using less space while increasing production. The automated setup will raise the company's lettuce output from 21,000 heads a day to 50,000. The farm will measure about 4,400 square meters, with floor-to-ceiling shelves on which the produce grows. The fully automated agricultural system is an effort to compensate for labor shortages elsewhere in the country's economy. The company plans to build more robotic plant farms throughout Japan, with the long-term goal of tapping into overseas markets.

Sunday, March 20, 2016

This Factory Robot Learns a New Job Overnight : Cloud Robotics



The world's largest robot maker, Fanuc, is developing robots that use reinforcement learning to figure out how to do things.
Inside a modest-looking office building in Tokyo lives an unusually clever industrial robot made by the Japanese company Fanuc. Give the robot a task, like picking widgets out of one box and putting them into another container, and it will spend the night figuring out how to do it. Come morning, the machine should have mastered the job as well as if it had been programmed by an expert.
Fanuc demonstrates a robot trained through reinforcement learning at the International Robot Exhibition in Tokyo in December.
Industrial robots are capable of extreme precision and speed, but they normally need to be programmed very carefully in order to do something like grasp an object. This is difficult and time-consuming, and it means that such robots can usually work only in tightly controlled environments.
Fanuc’s robot uses a technique known as deep reinforcement learning to train itself, over time, to perform a new task. It tries picking up objects while capturing video footage of the process. Each time it succeeds or fails, it remembers how the object looked, knowledge that is used to refine a deep learning model, a large neural network that controls its actions. Deep learning has proved to be a powerful approach to pattern recognition over the past few years.
“After eight hours or so it gets to 90 percent accuracy or above, which is almost the same as if an expert were to program it,” explains Shohei Hido, chief research officer at Preferred Networks, a Tokyo-based company specializing in machine learning. “It works overnight; the next morning it is tuned.”
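The details of Fanuc's system are not public, but the trial-and-error idea can be illustrated with a toy stand-in. The sketch below uses a simulated bin in place of a real arm and camera, and a simple bandit-style learner in place of a deep network; all names and numbers are illustrative, not taken from the actual system.

```python
# Toy stand-in for the trial-and-error learning described above (not Fanuc's
# actual deep RL system): a simulated bin of candidate grasp points, each with
# an unknown success probability the learner must discover by trying.
import random

random.seed(0)

N_GRASP_POINTS = 10
# Hidden "ground truth" success probabilities of each candidate grasp point.
true_success_prob = [random.random() for _ in range(N_GRASP_POINTS)]

# Running estimates: [successes, attempts] per grasp point.
stats = [[0, 0] for _ in range(N_GRASP_POINTS)]

def choose_grasp(epsilon=0.1):
    """Epsilon-greedy choice: mostly exploit the best estimate, sometimes explore."""
    if random.random() < epsilon:
        return random.randrange(N_GRASP_POINTS)
    estimates = [s / a if a else 0.5 for s, a in stats]
    return max(range(N_GRASP_POINTS), key=lambda i: estimates[i])

def attempt_grasp(i):
    """Simulated attempt: succeeds with the point's hidden probability."""
    return random.random() < true_success_prob[i]

for trial in range(2000):  # an "overnight" session, compressed into a loop
    i = choose_grasp()
    stats[i][0] += int(attempt_grasp(i))
    stats[i][1] += 1

best = max(range(N_GRASP_POINTS), key=lambda i: true_success_prob[i])
learned = max(range(N_GRASP_POINTS),
              key=lambda i: stats[i][0] / stats[i][1] if stats[i][1] else 0.0)
print("truly best grasp point:", best, "| learned best grasp point:", learned)
```

After enough attempts the estimated success rates converge on the hidden ones, which is the same basic idea, played out with far richer models and real video, that lets the Fanuc machine be "tuned" by morning.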
Robotics researchers are testing reinforcement learning as a way to simplify and speed up the programming of robots that do factory work. Earlier this month, Google published details of its own research on using reinforcement learning to teach robots how to grasp objects.
The Fanuc robot was programmed by Preferred Networks. Fanuc, the world’s largest maker of industrial robots, invested $7.3 million in Preferred Networks in August last year. The companies demonstrated the learning robot at the International Robot Exhibition in Tokyo last December.
One of the big potential benefits of the learning approach, Hido says, is that it can be accelerated if several robots work in parallel and then share what they have learned. So eight robots working for one hour can perform the same learning as one machine going for eight hours. "Our project is oriented toward distributed learning," Hido says. "You can imagine hundreds of factory robots sharing information."
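A rough illustration of that pooling idea (not the actual Preferred Networks system): several simulated robots each explore for a limited budget, and their success/attempt counts are merged so the combined experience matches one robot running much longer.

```python
# Toy illustration of pooled ("distributed") learning: several simulated robots
# each explore for a limited budget, then their counts are merged centrally.
import random

random.seed(1)
N_POINTS = 10
true_p = [random.random() for _ in range(N_POINTS)]  # hidden success rates

def run_robot(n_trials):
    """One robot exploring at random within a small trial budget."""
    stats = [[0, 0] for _ in range(N_POINTS)]
    for _ in range(n_trials):
        i = random.randrange(N_POINTS)
        stats[i][0] += int(random.random() < true_p[i])
        stats[i][1] += 1
    return stats

def merge(all_stats):
    """The 'cloud' aggregation step: sum counts across all robots."""
    merged = [[0, 0] for _ in range(N_POINTS)]
    for stats in all_stats:
        for i, (s, a) in enumerate(stats):
            merged[i][0] += s
            merged[i][1] += a
    return merged

# Eight robots for "one hour" each (250 trials apiece) pool as much experience
# as a single robot running for "eight hours" (2,000 trials).
pooled = merge([run_robot(250) for _ in range(8)])
print("total pooled attempts:", sum(a for _, a in pooled))
```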
This form of distributed learning, sometimes called "cloud robotics," is shaping up to be a big trend in both research and industry.
"Fanuc is well place to think about this," says Ken Goldberg, a professor of robotics at the University of California Berkeley, because it installs so many machine in factories around the world. He adds that cloud robotics will most likely reshape the way that robots are used in the coming years.
Goldberg and colleagues (including several researchers at Google) are in fact taking this a step further by teaching robots how certain movements may be used to grasp not only specific objects but also certain shapes. A paper on this work will appear at the IEEE International Conference on Robotics and Automation this May.
However, Goldberg notes, applying machine learning to robotics is challenging because controlling behavior is more complex than, say, recognizing objects in images. "Deep learning has made enormous progress in pattern recognition," Goldberg says. "The challenge in robotics is that you are doing something beyond that. You need to be able to generate appropriate action for a huge range of inputs."
Fanuc is not the only company developing robots that use machine learning. In 2014, the Swiss robot manufacturer ABB invested in another AI startup called Vicarious. The fruits of that investment have yet to appear, however.
Cloud Robotics and Automation
What if robots and automation systems were not limited by onboard computation, memory, or software?
Rather than viewing robots and automated machines as isolated systems with limited computation and memory, "Cloud Robotics and Automation" considers a new paradigm where robots and automation systems exchange data and perform computation via networks. Extending earlier work that links robots to the Internet,
Cloud Robotics and Automation builds on emerging research in cloud computing, machine learning, big data, open-source software, and major industry initiatives in the "Internet of Things", "Smarter Planet", "Industrial Internet", and "Industry 4.0."
Consider Google's autonomous car. It uses the network to index maps, images, and data on prior driving trajectories, weather, and traffic to determine spatial localization and make decisions. Data from each car is shared via the network for statistical optimization and machine learning performed by grid computing in the Cloud.
Another example is Kiva Systems' approach to warehouse automation and logistics, which uses large numbers of mobile platforms to move pallets, with a local network coordinating the platforms and sharing updates on floor conditions.
Google's James Kuffner coined the term "Cloud Robotics" in 2010. Cloud Robotics and Automation systems can be broadly defined as any robot or automation system that relies on data or code from a network to support its operation, i.e., where not all sensing, computation, and memory is integrated into a single standalone system.
There are at least four potential advantages to using the Cloud: 1) Big Data: access to updated libraries of images, maps, and object/product data, 2) Cloud Computing: access to parallel grid computing on demand for statistical analysis, learning, and motion planning, 3) Collective Learning: robots and systems sharing trajectories, control policies, and outcomes, and 4) Human Computation: use of crowdsourcing to tap human skills for analyzing images and video, classification, learning, and error recovery. The Cloud can also provide access to a) datasets, publications, models, benchmarks, and simulation tools, b) open competitions for designs and systems, and c) open-source software. It is important to recognize that Cloud Robotics and Automation raises critical new questions related to network latency, quality of service, privacy, and security.
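As a concrete, if hypothetical, example of the pattern, the sketch below shows a robot client offloading object recognition to a remote service and falling back to onboard processing when the network lets it down. The endpoint URL and JSON response format are invented for illustration; the timeout and fallback branch reflect the latency and availability concerns just mentioned.

```python
# Hypothetical sketch of a robot offloading perception to the cloud. The
# endpoint URL and JSON response format are invented for illustration.
import json
import urllib.request

CLOUD_ENDPOINT = "https://example.com/recognize"  # placeholder, not a real service

def recognize_in_cloud(jpeg_bytes: bytes, timeout_s: float = 2.0) -> dict:
    """Send one camera frame to the remote service and return its JSON reply."""
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
    )
    with urllib.request.urlopen(req, timeout=timeout_s) as resp:
        return json.loads(resp.read())

def act_on(frame: bytes) -> str:
    """Prefer the cloud answer; fall back to onboard processing if it fails."""
    try:
        return recognize_in_cloud(frame).get("label", "unknown")
    except (OSError, ValueError):
        return "fallback: use onboard model"

if __name__ == "__main__":
    print(act_on(b"\xff\xd8fake-jpeg-bytes"))
```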
The term "Singularity" is sometimes used to describe a punctuation point in the future where Artificial Intelligence (AI) surpasses human intelligence. The term was popularized by science fiction author Vernor Vinge and Ray Kurzweil. Superintelligence, a 2014 book by Nick Bostrom, explored similar themes that provoked Stephen Hawking, Elon Musk, and Bill Gates to issue warnings about the dangers of AI and robotics. My sense is that the Singularity is distracting attention from a far more realistic and important development that we might call "Multiplicity". Multiplicity characterizes an emerging category of systems where diverse groups of humans work together with diverse groups of machines to solve difficult problems. Multiplicity combines the wisdom of crowds with the power of cloud computing and is exemplified by many Cloud Robotics and Automation systems.

Saturday, March 12, 2016

Robot drones could do Reforestation

Reforestation is the natural or intentional restocking of existing forests and woodlands that have been depleted, usually through deforestation.

A drone, in a technological context, is an unmanned aircraft. 

Drones are more formally known as unmanned aerial vehicles (UAVs). Essentially, a drone is a flying robot. The aircraft may be remotely controlled or can fly autonomously through software-controlled flight plans in its embedded systems, working in conjunction with GPS. UAVs have most often been associated with the military, but they are also used for search and rescue, surveillance, traffic monitoring, weather monitoring and firefighting, among other things.

Saturday, March 5, 2016

Mind-Controlled Prosthetic Arm Moves Individual 'Fingers'



Closer to Enabling Piano Playing
Physicians and biomedical engineers from Johns Hopkins report what they believe is the first successful effort to wiggle fingers individually and independently of each other using a mind-controlled artificial “arm” to control the movement.
The proof-of-concept feat, described online this week in the Journal of Neural Engineering, represents a potential advance in technologies to restore refined hand function to those who have lost arms to injury or disease, the researchers say. The young man on whom the experiment was performed was not missing an arm or hand, but he was outfitted with a device that essentially took advantage of a brain-mapping procedure to bypass control of his own arm and hand.
“We believe this is the first time a person using a mind-controlled prosthesis has immediately performed individual digit movements without extensive training,” says senior author Nathan Crone, M.D., professor of neurology at the Johns Hopkins University School of Medicine. “This technology goes beyond available prostheses, in which the artificial digits, or fingers, moved as a single unit to make a grabbing motion, like one used to grip a tennis ball.”
For the experiment, the research team recruited a young man with epilepsy already scheduled to undergo brain mapping at The Johns Hopkins Hospital’s Epilepsy Monitoring Unit to pinpoint the origin of his seizures.

Friday, October 23, 2015

Simulated brain cells give robot instinctive navigation skills



One robot has been given a simulated version of the brain cells that let animals build a mental map of their surroundings.
The behavior and interplay of two types of neurons in the brain helps give humans and other animals an uncanny ability to navigate by building a mental map of their surroundings. Now one robot has been given a similar cluster of virtual cells to help it find its own way around.
Researchers in Singapore simulated two types of cells known to be used for navigation in the brain—so-called “place” and “grid” cells—and showed they could enable a small-wheeled robot to find its way around. Rather than simulate the cells physically, they created a simple two-dimensional model of the cells in software. The work was led by Haizhou Li, a professor at the Agency for Science, Technology and Research (A*STAR).
“Artificial grid cells could provide an adaptive and robust mapping and navigation system,” Li wrote in an e-mail coauthored with Huajin Tang and Yuan Miaolong, two research scientists at A*STAR who coauthored a paper about the work. “Humans and animals have an instinctual ability to navigate freely and deliberately in an environment rather effortlessly.”
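The paper's actual model is not reproduced here, but the standard textbook picture of the two cell types is easy to sketch: a grid cell's firing field can be approximated by summing three plane waves 60 degrees apart, which produces the characteristic hexagonal pattern, while a place cell fires in a single Gaussian bump around a preferred location. The snippet below is a minimal, illustrative version with made-up parameters.

```python
# Textbook-style 2-D caricature of the two cell types (illustrative only;
# this is not the A*STAR model).
import math

def grid_cell_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
    """Approximate grid-cell firing rate at position (x, y), in [0, 1]."""
    k = 4 * math.pi / (math.sqrt(3) * spacing)  # wave number for the grid spacing
    total = 0.0
    for angle in (0.0, math.pi / 3, 2 * math.pi / 3):
        kx, ky = k * math.cos(angle), k * math.sin(angle)
        total += math.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return max(total / 3.0, 0.0)  # rectify and normalise

def place_cell_rate(x, y, centre=(1.0, 1.0), width=0.2):
    """Place-cell firing: a single Gaussian bump around a preferred spot."""
    d2 = (x - centre[0]) ** 2 + (y - centre[1]) ** 2
    return math.exp(-d2 / (2 * width ** 2))

# Activations along a short simulated path of the robot.
for step in range(5):
    x = y = step * 0.25
    print(f"pos=({x:.2f},{y:.2f})  grid={grid_cell_rate(x, y):.2f}  "
          f"place={place_cell_rate(x, y):.2f}")
```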
The work is significant because it shows the potential for having machines mimic more complex activity in the brain. Roboticists increasingly use artificial neural networks to train robots to perform tasks such as object recognition and grasping, but these networks do not faithfully reflect the complexity and subtlety of a real biological brain.

Tuesday, September 15, 2015

Robotic Limbs Get a Sense of Touch

Advanced prosthetics have for the past few years begun tapping into brain signals to provide amputees with impressive new levels of control. Patients think, and a limb moves. But getting a robotic arm or hand to sense what it’s touching, and send that feeling back to the brain, has been a harder task.

The U.S. Defense Department’s research division last week claimed a breakthrough in this area, issuing a press release touting a 28-year-old paralyzed person’s ability to “feel” physical sensations through a prosthetic hand. Researchers have directly connected the artificial appendage to his brain, giving him the ability even to identify which mechanical finger is being gently touched, according to the Defense Advanced Research Projects Agency (DARPA). In 2013, other scientists at Case Western Reserve University also restored a sense of touch to amputees, giving patients a precise enough feeling of pressure in their fingertips to allow them to twist the stems off cherries.
The government isn’t providing much detail at this time about its achievement other than to say that researchers ran wires from arrays connected to the volunteer’s sensory and motor cortices—which identify tactile sensations and control body movements, respectively—to a mechanical hand developed by the Applied Physics Laboratory (APL) at Johns Hopkins University. The APL hand’s torque sensors can convert pressure applied to any of its fingers into electrical signals routed back to the volunteer’s brain.

Thursday, August 20, 2015

What is a Prosthesis?

A prosthesis is a device designed to replace a missing part of the body or to make a part of the body work better. Diseased or missing eyes, arms, hands, legs, or joints are commonly replaced by prosthetic devices.
Maxence was born without a right hand, but on Monday the six-year-old French boy got one through an effort highlighting the growing use of 3D printing technology to make prostheses.
"He is going to have a superhero hand the colour of his choice, that he can take off when he wishes," said his mother Virginie.
"It will be fun for him on the school yard with his friends."
The prosthesis comes through an American foundation called e-NABLE, which since 2013 has been connecting owners of 3D printers with families of children missing fingers or hands.
More than 1,500 prostheses have already been provided through the foundation, and the hand for Maxence was the group's first in France.
The device, which is worn like a glove and attaches with Velcro, cost less than 50 euros ($55) to produce and can easily be replaced with a larger model as the boy grows up.
It is designed for children who, like Maxence, have a wrist and a palm. The artificial hand grasps objects when the user bends his or her wrist, and is attached without surgery.
The prosthesis does not allow for more precise activities like tying shoes, but it does let users do things like ride a swing or a scooter, which are difficult without fingers.
According to Thierry Oquidam, the volunteer who produced the prosthesis, the advantage of the hand is its "fun" aspect, which can make the child feel as if he is dressed up in a costume rather than wearing a medical prosthesis.
[This may be one of the most amazing things in the life of a child who is missing fingers or a hand. So, kindly let more people know about this noble foundation.]



Friday, July 17, 2015

Graphene-based film can be used for efficient cooling of electronics.




Researchers at Chalmers University of Technology have developed a method for efficiently cooling electronics using a graphene-based film. The film has a thermal conductivity four times that of copper. Moreover, the graphene film can be attached to electronic components made of silicon, which improves the film's performance compared with the typical graphene behavior seen in previous, similar experiments.
Electronic systems available today accumulate a great deal of heat, mostly due to the ever-increasing demand for functionality. Getting rid of excess heat in efficient ways is imperative to prolonging electronic lifespan, and would also lead to a considerable reduction in energy usage. According to an American study, approximately half the energy required to run computer servers is used for cooling purposes alone.
A couple of years ago, a research team led by Johan Liu, a professor at Chalmers University of Technology, was the first to show that graphene can have a cooling effect on silicon-based electronics. That was the starting point for research on cooling silicon-based electronics using graphene. "But the methods that have been in place so far have presented the researchers with problems," Johan Liu says. "It has become evident that those methods cannot be used to rid electronic devices of great amounts of heat, because they have consisted of only a few layers of thermally conductive atoms. When you try to add more layers of graphene, another problem arises, a problem with adhesiveness. After having increased the number of layers, the graphene no longer adheres to the surface, since the adhesion is held together only by weak van der Waals bonds."
"We have now solved this problem by managing to create strong covalent bonds between the graphene film and the surface, which is an electronic component made of silicon," he continues.
The stronger bonds result from so-called functionalisation of the graphene, i.e. the addition of a property-altering molecule. Having tested several different additives, the Chalmers researchers concluded that an addition of (3-Aminopropyl)triethoxysilane (APTES) molecules has the most desired effect. When heated and put through hydrolysis, it creates so-called silane bonds between the graphene and the electronic component.
Moreover, functionalisation using silane coupling doubles the thermal conductivity of the graphene. The researchers have shown that the in-plane thermal conductivity of the graphene-based film, at 20 micrometres thick, can reach 1600 W/mK, four times that of copper.
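To put the 1600 W/mK figure in perspective, a back-of-the-envelope estimate with Fourier's law (heat flow = conductivity x cross-section x temperature difference / length) shows how much more heat the film can carry along its plane than an equally thin copper layer. The strip geometry and temperature difference below are illustrative, not from the study.

```python
# Back-of-the-envelope comparison using Fourier's law, q = k * A * dT / L,
# for heat spreading along the plane of a 20-micrometre film. Conductivities
# come from the article (1600 W/mK film, roughly 400 W/mK copper); geometry
# and temperature difference are illustrative only.
def heat_flow_watts(k_w_per_mk, cross_section_m2, delta_t_k, length_m):
    """Steady-state conductive heat flow through a slab or strip."""
    return k_w_per_mk * cross_section_m2 * delta_t_k / length_m

thickness = 20e-6          # 20 micrometre film, as quoted above
width = 0.01               # 1 cm wide strip
cross_section = thickness * width
delta_t = 30.0             # hot spot 30 K above the heat sink
length = 0.01              # heat travels 1 cm along the film

for name, k in [("graphene film", 1600.0), ("copper", 400.0)]:
    q = heat_flow_watts(k, cross_section, delta_t, length)
    print(f"{name:13s}: {q:.2f} W carried along the strip")
```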
"Increased thermal capacity could lead to several new applications for graphene," says Johan Liu. "One example is the integration of graphene-based film into microelectronic devices and systems, such as highly efficient Light Emitting Diodes (LEDs), lasers and radio frequency components for cooling purposes. Graphene-based film could also pave the way for faster, smaller, more energy efficient, sustainable high power electronics."
The research was conducted in collaboration with Shanghai University in China, Ecole Centrale Paris and EM2C-CNRS in France, and SHT Smart High Tech in Sweden.
SOURCE: Science Daily.

Tuesday, September 9, 2014

Untethered, autonomous soft robot developed

Researchers have developed a shape-changing robot that walks on four legs, can operate without the constraints of a tether, function in a snowstorm, move through puddles of water, and even withstand limited exposure to flames.

The soft robot is capable of functioning for several hours using a battery pack or for longer periods with a lightweight electrical tether, and is able to carry payloads of up to 8 kg.

The robot has been designed by a multidisciplinary team of researchers, including those from the School of Engineering and Applied Sciences, Wyss Institute for Biologically Inspired Engineering, and Department of Chemistry and Chemical Biology, at Harvard University, and the School of Mechanical and Aerospace Engineering at Cornell University.

Robots intended for use outside of laboratory environments should be able to operate without the constraints of a tether; this is especially true for robots intended to perform demanding tasks in challenging environments (for example, for search and rescue applications in unstable rubble), researchers said.

“We have developed composite soft materials, a mechanical design, and a fabrication method that enable the untethered operation of a soft robot without any rigid structural components,” researchers said.

Wednesday, March 5, 2014

Mechatronics

Mechatronics is a design process that combines mechanical engineering, electrical engineering, telecommunications engineering, control engineering and computer engineering. Mechatronics is a multidisciplinary field of engineering; that is to say, it rejects splitting engineering into separate disciplines. Originally, mechatronics covered just the combination of mechanics and electronics, hence the name, a blend of the two words.
Mechatronics is the synergistic combination of precision mechanical engineering, electronic control and systems thinking in the design of products and manufacturing processes. It relates to the design of systems, devices and products aimed at achieving an optimal balance between basic mechanical structure and its overall control. The purpose of this journal is to provide rapid publication of topical papers featuring practical developments in mechatronics. It will cover a wide range of application areas including consumer product design, instrumentation, manufacturing methods, computer integration and process and device control, and will attract a readership from across the industrial and academic research spectrum. Particular importance will be attached to aspects of innovation in mechatronics design philosophy which illustrate the benefits obtainable by an a priori integration of functionality with embedded microprocessor control. A major item will be the design of machines, devices and systems possessing a degree of computer based intelligence. The journal seeks to publish research progress in this field with an emphasis on the applied rather than the theoretical. It will also serve the dual role of bringing greater recognition to this important area of engineering.

Friday, September 13, 2013

Bees: perfect flying machines


THE UNIVERSITY OF QUEENSLAND   
Honeybees raise their abdomen to reduce drag and fly at higher speeds using less energy.
Image: kingfisher/Shutterstock
Scientists are harnessing honeybee flight secrets to develop insect-sized robot aircraft. 
A world-first study at UQ's Queensland Brain Institute has found that honeybees use a combination of what they feel and see to streamline their bodies and gain maximum 'fuel efficiency', positioning themselves for swift flight. 
QBI's Professor Mandyam Srinivasan said the discovery could help in the development of robot aircraft, such as small insect-like flying machines. 
“These bees are living proof that it's possible to engineer airborne vehicles that are agile, navigationally competent, weigh less than 100 milligrams, and can fly around the world using the energy given by an ounce of honey,” Professor Srinivasan said. 
“Honeybees often have to travel very long distances with only a small amount of nectar, so they have to be as fuel-efficient as possible,” he said. 
“They achieve this by raising their abdomen to reduce drag so they can fly at high speeds while using less energy.” 
QBI's Mr Gavin Taylor said previous research had found that honeybees used their eyes to sense the airspeed and move their abdomens accordingly. 
“When we trick a honeybee into thinking that it's flying forward by running background images past its eyes, the bee will move its body into a flying position despite being tethered. 
“The faster we move the images, the higher it lifts its abdomen to prepare for rapid flight,” Mr Taylor said. 
“However, if we blow wind directly at it without running any images, the bee raises its abdomen for only a little while. 
“This means that they rely on their vision to regulate their flights.” 
The team created a headwind and ran background images simultaneously, and found the bee raised its abdomen much higher than when the fan was switched off, indicating the streamlining response was also driven by airflow. 
Professor Srinivasan said the honeybee sensed airflow with its antenna. 
“As soon as we immobilised the bee's antenna, its streamlining response was reduced as it relied only on its eyes.” 
Professor Srinivasan said the research could help develop tiny ‘robotbee' aircraft. 
”A better understanding of how these honeybees fly takes us one step further towards perfecting these flying machines,” he said. 
Results of the study, "Vision and airflow combine to streamline flying honeybees", by Gavin J. Taylor, Tien Luu, David Ball and Mandyam V. Srinivasan, have been published in Scientific Reports.

Sunday, August 26, 2012

High-tech, remote-controlled camera for neurosurgery




(Phys.org)—A small camera inserted into the body enables surgeons to perform many types of operations with minimal trauma. EU-funding enabled researchers to extend the use of such interventions to a variety of neurosurgical applications.
The medical field has made great advances in minimising the trauma associated with various surgical interventions. The use of surgical microscopes has been influential in guiding a surgeon's tools to the appropriate location and reducing tissue damage while ensuring all affected areas are treated.
Within the last 30 years, more and more procedures have lent themselves to endoscopic intervention, also called minimally invasive surgery (MIS).
A very small, flexible tube with a camera at its tip is inserted into an incision or natural body opening (e.g. nasal cavity) and directed to the appropriate site for diagnosis and treatment. The camera offers a wide panoramic view superior to the traditional conical view of a surgical microscope.
In the case of neurosurgery where operative and post-operative trauma can lead to debilitating loss of brain function and even death, endoscopic intervention is particularly attractive. However, limitations of available endoscopic surgical systems have excluded their use in many important neurosurgical applications.
In order to extend the use of potentially life-saving endoscopic surgery, European scientists initiated the 'Paraendoscopic intuitive computer assisted operating system' (PICO) project.
With EU-funding, the consortium of small and medium-sized enterprises (SMEs) and research and technology development (RTD) partners produced important endoscopic neurosurgical technology.
The PICO positioning system consisted of a balanced holding-and-motion device with fine motor-driven adjustment. The holding-and-motion system could be attached either to the operating table or to the patient's head.
A novel interface for remote control enabled the surgeon to steer the endoscope without removing their hands from the surgical instruments.
Scientists also incorporated a three-dimensional (3D) visualisation system capable of feeding data to a monitor or head-mounted display. The system enabled voice-controlled delivery of additional information such as pre-operative test results and ultrasound images.
Micro-mechanical surgical instruments for a number of tasks such as suctioning, cutting and sample-taking were specifically designed for endoscopic neurosurgery.
The PICO system is a particularly important contribution to the field of endoscopic neurosurgery. Its market availability should shorten many procedures while reducing associated surgical and post-operative trauma and thus morbidity and mortality.
Provided by CORDIS
"High-tech, remote-controlled camera for neurosurgery." August 24th, 2012. http://medicalxpress.com/news/2012-08-high-tech-remote-controlled-camera-neurosurgery.html
Posted by
Robert Karl Stonjek

Researchers investigate early language acquisition in robots


(Phys.org)—Research into robotics continues to grow in Europe. And the introduction of humanoid robots has compelled scientists to investigate the acquisition of language. A case in point is a team of researchers in the United Kingdom that studied the development of robots that could acquire linguistic skills. Presented in the journal PLoS ONE, the study focused on early stages analogous to some characteristics of a human child between 6 and 14 months of age, the transition from babbling to first word forms. The results, which shed light on the potential of human-robot interaction systems in studies investigating early language acquisition, are an outcome of the ITALK ('Integration and transfer of action and language knowledge in robots') project, which received EUR 6.3 million under the 'Information and communication technologies' (ICT) Theme of the EU's Seventh Framework Programme (FP7).
Scientists from the Adaptive Systems Research Group at the University of Hertfordshire in the United Kingdom have discovered that a robot analogous to a child between 6 and 14 months old has the ability to develop rudimentary linguistic skills. The robot, called DeeChee, moved from various syllabic babble to various word forms, including colours and shapes, after it 'conversed' with humans. The latter group was told to speak to the robot as if it were a small child.
'It is known that infants are sensitive to the frequency of sounds in speech, and these experiments show how this sensitivity can be modelled and contribute to the learning of word forms by a robot,' said lead author Caroline Lyon of the University of Hertfordshire.
In their paper, the authors wrote: 'We wanted to explore human-robot interaction and were deliberately not prescriptive. However, leaving participants to talk naturally opened up possibilities of a wide range of behaviour, possibilities that were certainly realised. Some participants were better teachers than others: some of the less good produced very sparse utterances, while other talkative participants praised DeeChee whatever it did, which skewed the learning process towards non-words.'
The researchers said one of the reasons the robot learnt the words is that the teacher said them repeatedly, an anticipated response. The second reason is that the non-salient word strings were variable, so their frequencies were spread out. According to the team, this phenomenon is the basis of a number of automated plagiarism detectors, in which precise matches of short lexical strings indicate copying. Lastly, they said the phonemic representation of speech from the teacher to the robot is not a uniformly stable mapping of sounds.
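The plagiarism-detector analogy boils down to counting exact matches of short word strings (n-grams): such matches are rare by chance, so a high overlap signals copying, just as DeeChee's frequent, stable syllable strings stood out against variable babble. A toy version, with invented example sentences:

```python
# Toy version of n-gram matching (real plagiarism detectors are more elaborate):
# exact matches of short word strings between two texts are rare by chance,
# so a high overlap suggests copying. Example sentences are invented.
def ngrams(text: str, n: int = 4) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a: str, b: str, n: int = 4) -> float:
    """Fraction of text b's n-grams that also appear verbatim in text a."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(gb) if gb else 0.0

original = "the robot learnt salient words because the teacher repeated them often"
copied = "the robot learnt salient words because it heard them again and again"
unrelated = "graphene film conducts heat four times better than copper does"

print(f"copied vs original:    {overlap(original, copied):.2f}")
print(f"unrelated vs original: {overlap(original, unrelated):.2f}")
```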
'The frequencies of syllables in words with variable phonemic forms may be attenuated compared with those in salient content words, or parts of such words,' they wrote. 'It has long been realised that there is in practice a great deal of variation in spontaneous speech. This work shows the potential of human-interaction systems to be used in studies of language acquisition, and the iterative development methodology highlights how the embodied nature of interaction may bring to light important factors in the dynamics of language acquisition that would otherwise not occur to modellers.'
More information: Lyon, C., et al. 'Interactive Language Learning by Robots: The Transition from Babbling to Word Forms'. PLoS ONE 7(6): e38236. doi:10.1371/journal.pone.0038236
Provided by CORDIS
"Researchers investigate early language acquisition in robots." August 24th, 2012. http://phys.org/news/2012-08-early-language-acquisition-robots.html


Robot NICO learning self awareness using mirrors




(Phys.org)—Self awareness is one of the hallmarks of intelligence. We as human beings clearly understand that we are both our bodies and our minds, and that others perceive us differently than we perceive ourselves. Perhaps nowhere is this more evident than when we look in a mirror.
In so doing we understand that the other person looking back is really the three-dimensional embodiment of who we are as a complete person. For this reason, researchers use something called the mirror test as a means of discerning other animals' level of self awareness. They put a mark of some sort on the animal's face without it knowing, then allow the animal to look in a mirror; if the animal is able to comprehend that the mark is on its own face, and demonstrates as much by touching itself where it has been marked, then the animal is deemed to have self awareness. Thus far, very few animals have passed the test: some apes, dolphins and elephants. Now, researchers at Yale University are trying to program a robot that is able to pass the test as well.
The robot, named NICO, has been developed by Brian Scassellati and Justin Hart, who together have already taught it to recognize where its arm is in three-dimensional space to a very fine degree, a feat never before achieved with a robot of any kind. The next step is to do the same with other body parts, the feet, legs, torso and of course eventually the head, which is the most critical part in giving a robot self awareness, the ultimate goal of the project.
Programming a robot to have self awareness is considered one of the key milestones in creating robots that are truly useful in everyday life. Robots that "live" in people's homes, for example, would have to have a very good understanding of where every part of themselves is and what it is doing in order to avoid accidentally harming housemates. This is because the movements of people are random and haphazard, so much so that people quite often accidentally bump into one another. With robots, because they are likely to be stronger, such accidents would be unacceptable.
Scassellati and Hart believe they are getting close and expect NICO to be able to pass the mirror test within the next couple of months. No doubt others will be watching very closely, because if they meet with success it will be a truly historic moment.
© 2012 Phys.Org
"Robot NICO learning self awareness using mirrors." August 24th, 2012. http://phys.org/news/2012-08-robot-nico-awareness-mirrors.html
Posted by
Robert Karl Stonjek

Friday, June 22, 2012

'Hallucinating' robots arrange objects for human use



A robot populates a room with imaginary human stick figures in order to decide where objects should go to suit the needs of humans.
(Phys.org) -- If you hire a robot to help you move into your new apartment, you won't have to send out for pizza. But you will have to give the robot a system for figuring out where things go. The best approach, according to Cornell researchers, is to ask "How will humans use this?"
Researchers in the Personal Robotics Lab of Ashutosh Saxena, assistant professor of computer science, have already taught robots to identify common objects, pick them up and place them stably in appropriate locations. Now they've added the human element by teaching robots to "hallucinate" where and how humans might stand, sit or work in a room, and place objects in their usual relationship to those imaginary people.
Their work will be reported at the International Symposium on Experimental Robotics, June 21 in Quebec, and the International Conference of Machine Learning, June 29 in Edinburgh, Scotland.
Previous work on robotic placement, the researchers note, has relied on modeling relationships between objects. A keyboard goes in front of a monitor, and a mouse goes next to the keyboard. But that doesn't help if the robot puts the monitor, keyboard and mouse at the back of the desk, facing the wall.
Above left, random placement of objects in a scene puts food on the floor, shoes on the desk and a laptop teetering on top of the fridge. Considering the relationships between objects (upper right) is better, but the laptop faces away from a potential user and the food sits higher than most humans would like. Adding human context (lower left) makes things more accessible. Lower right: how an actual robot carried it out. (Personal Robotics Lab)
Relating objects to humans not only avoids such mistakes but also makes computation easier, the researchers said, because each object is described in terms of its relationship to a small set of human poses, rather than to the long list of other objects in a scene. A computer learns these relationships by observing 3-D images of rooms with objects in them, into which it imagines human figures, placing them in practical relationships with objects and furniture. You don't put a sitting person where there is no chair. You could put a sitting person on top of a bookcase, but there are no objects there for the person to use, so that placement is ignored. The computer calculates the distance of objects from various parts of the imagined human figures, and notes the orientation of the objects.
Eventually it learns commonalities: There are lots of imaginary people sitting on the sofa facing the TV, and the TV is always facing them. The remote is usually near a human's reaching arm, seldom near a standing person's feet. "It is more important for a robot to figure out how an object is to be used by humans, rather than what the object is. One key achievement in this work is using unlabeled data to figure out how humans use a space," Saxena said.
In a new situation, the robot places human figures in a 3-D image of a room, locating them in relation to the objects and furniture already there. "It puts a sample of human poses in the environment, then figures out which ones are relevant and ignores the others," Saxena explained. It then decides where new objects should be placed in relation to the human figures, and carries out the action.
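The placement logic can be caricatured in a few lines: sample ("hallucinate") human poses, then score each candidate spot by how well its distance to the nearest relevant pose matches what was learned for that object type. The sketch below is a deliberately simplified stand-in for the Cornell system, with made-up poses, distances and candidate spots.

```python
# Deliberately simplified stand-in for the placement idea described above.
# Poses, preferred distance and candidate spots are all made up.
import math

# Hallucinated sitting poses: (x, y) of an outstretched hand, in metres.
hallucinated_hands = [(1.0, 0.5), (1.2, 2.0), (3.5, 0.8)]

# Illustrative learned preference: a remote control "likes" to sit about
# 0.3 m from a reaching hand.
PREFERRED_DIST = 0.3

def placement_score(spot):
    """Higher is better: penalise deviation from the preferred hand distance."""
    nearest = min(math.dist(spot, hand) for hand in hallucinated_hands)
    return -abs(nearest - PREFERRED_DIST)

candidate_spots = [(0.8, 0.6), (2.5, 2.5), (3.4, 1.1), (0.0, 3.0)]
best = max(candidate_spots, key=placement_score)
print("best spot for the remote:", best)
```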
The researchers tested their method using images of living rooms, kitchens and offices from the Google 3-D Warehouse, and later, images of local offices and apartments. Finally, they programmed a robot to carry out the predicted placements in local settings. Volunteers who were not associated with the project rated the placement of each object for correctness on a scale of 1 to 5.
Comparing various algorithms, the researchers found that placements based on human context were more accurate than those based solely on relationships between objects, but the best results of all came from combining human context with object-to-object relationships, with an average score of 4.3. Some tests were done in rooms with furniture and some objects, others in rooms where only a major piece of furniture was present. The object-only method performed significantly worse in the latter case because there was no context to use. "The difference between previous works and our [human to object] method was significantly higher in the case of empty rooms," Saxena reported.
The research was supported by a Microsoft Faculty Fellowship and a gift from Google. Marcus Lin, M.Eng. '12, received an Academic Excellence Award from the Department of Computer Science in part for his work on this project.
Provided by Cornell University
"'Hallucinating' robots arrange objects for human use." June 18th, 2012. http://phys.org/news/2012-06-hallucinating-robots-human.html
Posted by
Robert Karl Stonjek

Wednesday, May 2, 2012

Roboticist creates Hugvie - Huggable vibrating pillow smartphone accessory



(Phys.org) -- Japanese robot designer Hiroshi Ishiguro is fast becoming a sort of roboticist for the people, in Japan anyway. Instead of terminator-style robots meant to do a lot of serious work or to serve on the battlefield, his robots are soft and cushy, cute and perhaps a little smooshy. He’s also created a robot in his own image. Now he’s introducing something he calls the Hugvie, a robot that looks sort of like a generic mono-legged human baby, or perhaps a doll with no eyes, fingers or toes. It serves as the medium through which people converse in a new way using a smartphone. While the user holds the Hugvie, or presses it against the face, it vibrates slightly at the same frequency as the voice on the other end, adding another degree of intimacy to the conversation. At least that’s the idea.
In reality, it’s a stuffed pillow with a little pocket for holding a cell phone. When in use, a hidden gadget listens in and converts the sounds it hears to vibrations which it sends through the pillow to the person holding it.
Ishiguro, an Osaka University professor, and inventor of the Telenoid R1, which has been described as an animated outsized fetus that talks, spoke at a press conference in Tokyo recently, to announce the debut of Hugvie. He said that the robot actually has two vibrators inside of it and that together they are meant to mimic the sound of the human heartbeat. He added that the vibrations can be customized to allow for softer or stronger pulses as they respond to the volume and strength of the voice on the other end of the line. He added that his team has already tested the Hugvie in several environments and that people, especially senior citizens, tend to hug the little pillow bot when speaking with someone close to them.
The idea behind the Hugvie is to add another dimension to the experience of speaking on the phone with someone in intimate ways; taking pillow talk to the next level if you will, providing that feeling of being there with that other person who really isn’t. The vibrations are meant to reproduce the sensations people would experience were they able to talk to one another with their faces, throats or chests touching, as people often do when lying down with one another while conversing.
Currently, the Hugvie is only available (in a variety of colors) to customers in Japan, but if interest spreads, as with any other consumer product, it will almost certainly be made available to customers elsewhere.
Via: DigInfo TV
© 2012 Phys.Org
"Roboticist creates Hugvie - Huggable vibrating pillow smartphone accessory." May 1st, 2012. http://phys.org/news/2012-05-roboticist-hugvie-huggable-vibrating.html
Posted by
Robert Karl Stonjek

Monday, April 9, 2012

Children perceive humanoid robot as emotional, moral being




A study participant and Robovie share a hug, one of the social interactions in the UW experiment. Credit: American Psychological Association
(PhysOrg.com) -- Robot nannies could diminish child care worries for parents of young children. Equipped with alarms and monitoring capabilities to guard children from harm, a robot nanny would let parents leave youngsters at home without a babysitter.
Sign us up, parents might say.
Human-like robot babysitters are in the works, but it's unclear at this early stage what children's relationships with these humanoids will be like and what dangers lurk in this convenient-sounding technology.
Will the robots do more than keep children safe and entertained? Will they be capable of fostering social interactions, emotional attachment, intellectual growth and other cognitive aspects of human existence? Will children treat these caregivers as personified entities, or like servants or tools that can be bought and sold, misused or ignored?
"We need to talk about how to best design social robots for children, because corporations will go with the design that makes the most money, not necessarily what’s best for children" said Peter Kahn, associate professor of psychology at the University of Washington. "In developing robot nannies, we should be concerned with how we might be dumbing down relationships and stunting the emotional and intellectual growth of children."
To guide robot design, Kahn and his research team are exploring how children interact socially with a humanoid robot. In a new study, the researchers report that children exchanged social pleasantries, such as shaking hands, hugging and making small talk, with a remotely controlled human-like robot (Robovie) that appeared autonomous. Nearly 80 percent of the children, 90 in all, an even mix of boys and girls aged 9, 12 or 15, believed that the robot was intelligent, and 60 percent believed it had feelings.
The journal Developmental Psychology published the findings in its March issue.
The children also played a game of "I Spy" with Robovie, allowing the researchers to test what morality children attribute to the robot. The game started with the children guessing an object in the room chosen by Robovie, who then got a turn to guess an object chosen by the child.
But the humanoid robot’s turn was cut short when a researcher interrupted to say it was time for the interview part of the experiment and told Robovie that it had to go into a storage closet. Via a hidden experimenter's commands, Robovie protested, and said that it wasn't fair to end the game early. "I wasn't given enough chances to guess the object," the robot argued, going on to say that its feelings were hurt and that the closet was dark and scary.
When interviewed by the researchers, 88 percent of the children thought the robot was treated unfairly in not having a chance to take its turn, and 54 percent thought that it was not right to put it in the closet. A little more than half said that they would go to Robovie for emotional support or to share secrets.
But they were less agreeable about allowing Robovie civil liberties, like being paid for work. The children also said that the robot could be bought, sold and should not have the right to vote.
The findings show that the social interactions with Robovie led children to develop feelings for the robot and attribute some moral standing to it. This suggests that the interactions used in the study represent aspects of human experience that could be used for designing robots.
The researchers added that robot nanny design should also factor in how agreeable a robot should be with a child. Should a robot be programmed to give in to all the child's desires, play whatever game is demanded? Or should it push back, like Robovie did when the I Spy game ended early?
Kahn believes that as social robots become pervasive in our everyday lives, they can benefit children but also potentially impoverish their emotional and social development.
The National Science Foundation funded the study. Co-authors at UW are Nathan Freier, Rachel Severson, Jolina Ruckert and Solace Shen. Other co-authors are Brian Gill of Seattle Pacific University, and Hiroshi Ishiguro and Takayuki Kanda, both of Advanced Telecommunications Research Institute, which created Robovie.
More information: Learn more, watch videos at Kahn's lab website.
Provided by University of Washington
"Children perceive humanoid robot as emotional, moral being." April 6th, 2012. http://phys.org/news/2012-04-children-humanoid-robot-emotional-moral.html
Posted by
Robert Karl Stonjek