The NAO Next Gen robot from Aldebaran Robotics is quite remarkable. Sporting two cameras, four microphones, eight pressure sensors, and a powerful processor, this robot can accomplish any number of tasks. In this video you will see it recognize its owner as he walks into the room, the robot's head following his movements. It recognizes speech, grabs a rubber ball with its sophisticated hands, and follows its owner around while walking. The robot's walk is quite good, and it even has an impressive way of protecting itself in case of a fall; if it has fallen, it gets back up with ease. Indeed, the future of personal robotics looks very promising.
Aldebaran Robotics Nao Next Gen Fully Programmable Humanoid Robot Review
Aldebaran Robotics' Nao robot has already received a few upgrades from both the company itself and other developers, but it now has a proper successor: the company has taken the wraps off its new and improved Nao Next Gen robot, touting features like a 1.6GHz Atom processor and dual HD cameras that promise better face and object recognition, even in poor lighting conditions. The Nao Next Gen launches three years after the original Nao's debut and continues to target the same markets: research and educational institutions, personal wellbeing, and individual developers, who may apply to join the Nao Developer Program. Aldebaran Robotics says it has sold 2,000 units of Nao so far, though the goal for Nao Next Gen will surely be exponentially higher.
Aldebaran Robotics, the world leader in humanoid robotics, has released the latest version of its NAO robot: NAO Next Gen. The power of NAO Next Gen, the new fully programmable humanoid robot with the most extensive worldwide use, is opening up new perspectives and fields of application for its users. "The inception of this new generation of NAO robots means a lot to our company. We are proud to be in a position to provide our customers with endless options, whatever their sector. With NAO Next Gen coming of age, we shall be able to make it serve organizations that care for autistic children and for people losing their autonomy. I created Aldebaran Robotics in 2005 with this aim: to contribute to humankind's well-being," says founder Bruno Maisonnier. Three years after it started selling its first NAO models, the company has sold 2,000 robots worldwide. Aldebaran Robotics has now released the latest generation of its programmable humanoid robots, which is intended for research, teaching and, more generally, for exploring the new area of service robotics. Stemming from six years of research and dialogue with its community of researchers and users, NAO Next Gen is capable of a higher level of interaction thanks to increased computing power, improved stability and higher accuracy. The latest version of the NAO robot therefore considerably widens the range of research, teaching and application possibilities available to specific user groups.
One of NAO Next Gen's most remarkable new features is its on-board computer, based on the powerful 1.6GHz Intel Atom processor, which is well suited to multi-tasking calculations. It also has two HD cameras attached to a field-programmable gate array (FPGA). This set-up allows the simultaneous reception of two video streams, significantly increasing speed and performance in face and object recognition, even under poor lighting conditions. Alongside its hardware innovations, NAO Next Gen boasts a new, faster and more reliable voice-recognition engine supplied by Nuance. This engine is coupled with a new functionality known as word spotting, which can isolate and recognize a specific word within a sentence or a conversation. "On top of this new hardware version, we shall be delivering new software functionalities like smart torque control, a system to prevent limb/body collisions, an improved walking algorithm, and more. We have capitalized upon our experience and customer feedback in order to deliver the most suitable and efficient platform. In terms of applications, especially at high-school level, we are focused on educational content, while, when it comes to improvements in personal well-being, we are working on developing specialized applications," explains Bruno Maisonnier. "We are also pursuing our goal to provide a NAO intended for individuals through the Developer Program - a community of programmers who are working with us today to invent tomorrow's personal robotics," adds the chairman of Aldebaran Robotics.
Hardware Platform
NAO is a programmable, 57-cm-tall humanoid robot with the following key components:
- Body with 25 degrees of freedom (DOF), whose key elements are electric motors and actuators
- Sensor network, including 2 cameras, 4 microphones, a sonar rangefinder, 2 IR emitters and receivers, 1 inertial board, 9 tactile sensors, and 8 pressure sensors
- Various communication devices, including a voice synthesizer, LED lights, and 2 high-fidelity speakers
- Intel Atom 1.6 GHz CPU (located in the head) that runs a Linux kernel and supports Aldebaran's proprietary middleware (NAOqi)
- Second CPU (located in the torso)
- 27.6-watt-hour battery that provides NAO with 1.5 or more hours of autonomy, depending on usage
NAOqi
Building robotics applications is challenging:
- The building blocks of robotics applications include state-of-the-art, complex technologies, such as speech recognition, object recognition, and object mapping.
- Applications must be secure and able to run in constrained environments that have limited resources.
- NAOqi, the embedded NAO software, includes a fast, secure, reliable, cross-platform, distributed robotics framework that provides a solid foundation on which developers can leverage and improve NAO's functionality.
- NAOqi allows algorithms to share their APIs with others and helps prepare modules to run on NAO or on remote PCs.
- Code development can take place on Windows, Mac OS, or Linux, and modules can be called from many languages, including C++, Python, Urbi, and .Net. Modules built on top of this framework offer rich APIs for interacting with NAO (see the sketch after this list).
- NAOqi meets common robotics needs: parallelism, resource management, synchronization, and events.
- In NAOqi, as in other frameworks, there are generic layers, but they are created especially for NAO. NAOqi allows homogeneous communication between different modules (motion, audio, and video), homogeneous programming, and homogeneous information sharing through ALMemory.
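As a concrete illustration, here is a minimal sketch of calling NAOqi modules from Python through ALProxy, the standard client interface of Aldebaran's SDK. The robot address nao.local is an assumption; 9559 is NAOqi's default port.

    from naoqi import ALProxy

    ROBOT_IP = "nao.local"   # assumed hostname; substitute your robot's IP
    PORT = 9559              # NAOqi's default listening port

    # Every NAOqi module (motion, audio, vision, memory...) is reached
    # through a proxy, whether your code runs on NAO or on a remote PC.
    tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
    tts.say("Hello, I am NAO.")

    # ALMemory is the shared blackboard mentioned above; modules publish
    # sensor values and events to it under string keys.
    memory = ALProxy("ALMemory", ROBOT_IP, PORT)
    print(memory.getData("Device/SubDeviceList/Battery/Charge/Sensor/Value"))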
Motion
Omnidirectional walking:
NAO's walk uses a simple dynamic model (linear inverted pendulum) and quadratic programming. It is stabilized using feedback from joint sensors. This makes walking robust and resistant to small disturbances, and torso oscillations in the frontal and lateral planes are absorbed. NAO can walk on a variety of floor surfaces, such as carpeted, tiled, and wooden floors, and can transition between these surfaces while walking.
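A minimal walking sketch using the ALMotion API, assuming NAOqi 1.14 or later (where the call is moveTo; earlier releases named it walkTo). The address and distances are placeholders.

    from naoqi import ALProxy

    motion = ALProxy("ALMotion", "nao.local", 9559)  # assumed address

    motion.wakeUp()      # stiffen joints and stand up
    motion.moveInit()    # settle into a stable double-support posture

    # Omnidirectional walk: x (forward), y (lateral), theta (rotation)
    motion.moveTo(0.5, 0.0, 0.0)    # 0.5 m straight ahead
    motion.moveTo(0.0, 0.2, 1.57)   # sidestep 0.2 m left, turning ~90 degrees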
Whole body motion:
NAO's motion module is based on generalized inverse kinematics, which handles Cartesian coordinates, joint control, balance, redundancy, and task priority. This means that when asked to extend its arm, NAO bends over, because its arm and leg joints are taken into account together, and it will stop a movement rather than lose its balance.
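The whole-body behavior described above can be exercised through ALMotion's whole-body balancer. The sketch below, with an assumed address and an arbitrary target position, asks the solver to bring the right arm to a Cartesian point while it keeps the robot balanced.

    from naoqi import ALProxy

    motion = ALProxy("ALMotion", "nao.local", 9559)  # assumed address
    motion.wakeUp()

    # Enable the generalized-inverse-kinematics balancer, then drive the
    # right arm in Cartesian space; the solver recruits the other joints
    # (bending at the hips if needed) to keep the CoM over the feet.
    motion.wbEnable(True)
    motion.wbEnableEffectorControl("RArm", True)
    motion.wbSetEffectorControl("RArm", [0.25, -0.15, 0.30])  # x, y, z in meters

    # ... later, release whole-body control
    motion.wbEnable(False)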
Fall Manager:
The Fall Manager protects NAO when it falls. Its main function is to detect when NAO's center of mass (CoM) shifts outside the support polygon, which is determined by the position of the foot or feet in contact with the ground. When a fall is detected, all motion tasks are killed and, depending on the fall direction, NAO's arms assume a protective position, the CoM is lowered, and joint stiffness is reduced to zero.
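The reflex can be queried and toggled from the ALMotion API; the event name below is the one used by recent NAOqi releases and is worth verifying against your SDK version.

    from naoqi import ALProxy

    IP, PORT = "nao.local", 9559            # assumed address
    motion = ALProxy("ALMotion", IP, PORT)
    memory = ALProxy("ALMemory", IP, PORT)

    # Check and (re-)enable the protective reflex.
    print(motion.getFallManagerEnabled())
    motion.setFallManagerEnabled(True)

    # The Fall Manager raises an event in ALMemory when a fall occurs.
    print(memory.getData("robotHasFallen"))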
Vision
NAO has two cameras and can track, learn, and recognize images and faces.
- NAO sees using two 960p cameras, which can capture up to 30 images per second.
- The first camera, located on NAO's forehead, scans the horizon, while the second, located at mouth level, scans the immediate surroundings.
- The software lets you recover photos and video streams of what NAO sees. But eyes are only useful if you can interpret what you see.
- That's why NAO contains a set of algorithms for detecting and recognizing faces and shapes. NAO can recognize who is talking to it or find a ball or, eventually, more complex objects.
- These algorithms have been specially developed, with constant attention to using a minimum of processor resources.
- Furthermore, NAO's SDK lets you develop your own modules to interface with OpenCV (the Open Source Computer Vision library originally developed by Intel).
- Since you can execute modules on NAO or transfer them to a PC connected to NAO, you can easily use the OpenCV display functions to develop and test your algorithms with image feedback, as in the sketch after this list.
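Here is a minimal sketch of that workflow, fetching one frame from the top camera over the network and handing it to OpenCV. The subscriber name and address are placeholders; the numeric constants (resolution 2 = VGA, color space 13 = BGR) follow the ALVideoDevice conventions.

    import numpy as np
    import cv2
    from naoqi import ALProxy

    video = ALProxy("ALVideoDevice", "nao.local", 9559)  # assumed address

    # Subscribe to camera 0 (forehead): VGA (2), BGR color space (13), 30 fps.
    handle = video.subscribeCamera("my_viewer", 0, 2, 13, 30)
    try:
        img = video.getImageRemote(handle)   # metadata list; pixels at index 6
        width, height, raw = img[0], img[1], img[6]
        frame = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))
        cv2.imwrite("nao_view.png", frame)   # or cv2.imshow(...) for live feedback
    finally:
        video.unsubscribe(handle)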
Audio
NAO uses four microphones to track sounds, and its voice recognition and text-to-speech capabilities allow it to communicate in 8 languages.
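As a sketch of the speech side, the ALSpeechRecognition module can be pointed at a small vocabulary; the boolean flag enables the word spotting described earlier (available with the Nuance engine on recent NAOqi versions). The address, subscriber name, and words are placeholders.

    from naoqi import ALProxy

    asr = ALProxy("ALSpeechRecognition", "nao.local", 9559)  # assumed address
    asr.setLanguage("English")

    # True enables word spotting: the words may occur anywhere in a sentence.
    asr.setVocabulary(["ball", "hello", "stop"], True)

    asr.subscribe("my_asr")   # start the engine
    # ... recognized words are published to ALMemory under "WordRecognized" ...
    asr.unsubscribe("my_asr")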
Sound Source Localization:
One of the main purposes of humanoid robots is to interact with people, and sound localization allows a robot to identify the direction of sounds. To produce robust and useful outputs while meeting CPU and memory constraints, NAO's sound source localization is based on an approach known as "Time Difference of Arrival" (TDOA).
When a nearby source emits a sound, each of NAO’s four microphones receives the sound wave at slightly different times.
For example, if someone talks to NAO on its left side, the corresponding sound wave first hits the left microphone, then the front and rear microphones a few milliseconds later, and finally the right microphone.
These differences, known as interaural time difference (ITD), can then be mathematically processed to determine the current location of the emitting source.
By solving these equations each time a sound is heard, NAO can determine the direction of the emitting source (azimuth and elevation angles) from the ITDs between its four microphones.
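To make the geometry concrete, here is the core far-field relation for a single microphone pair, a simplified two-microphone illustration rather than NAO's actual four-microphone solver: a delay of Δt between two microphones a distance d apart satisfies sin(θ) = c·Δt/d, where c is the speed of sound and θ the angle of incidence.

    import math

    SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

    def azimuth_from_itd(delta_t, mic_distance):
        # Far-field angle of arrival for one microphone pair (simplified).
        # delta_t: arrival-time difference in seconds
        # mic_distance: microphone separation in meters
        s = SPEED_OF_SOUND * delta_t / mic_distance
        s = max(-1.0, min(1.0, s))           # clamp against measurement noise
        return math.degrees(math.asin(s))

    # A 0.2 ms delay across microphones ~0.12 m apart (a plausible head
    # width, assumed here) puts the source about 35 degrees off-axis.
    print(azimuth_from_itd(0.0002, 0.12))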
This feature is available as a NAOqi module called ALAudioSourceLocalization; it provides a C++ and Python API that allows precise interactions with a Python script or NAOqi module.
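A minimal usage sketch of that module from Python, with an assumed address and subscriber name; detections are published to ALMemory under the event key shown below.

    from naoqi import ALProxy

    IP, PORT = "nao.local", 9559   # assumed address

    loc = ALProxy("ALAudioSourceLocalization", IP, PORT)
    memory = ALProxy("ALMemory", IP, PORT)

    loc.subscribe("my_sound_tracker")   # start the extractor

    # Each detection lands in ALMemory; the payload carries the source's
    # azimuth and elevation angles plus a confidence estimate.
    print(memory.getData("ALAudioSourceLocalization/SoundLocated"))

    loc.unsubscribe("my_sound_tracker")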
Possible applications include:
- Human Detection, Tracking, and Recognition
- Noisy Object Detection, Tracking, and Recognition
- Speech Recognition in a specific direction
- Speaker Recognition in a specific direction
- Remote Monitoring/Security applications
- Entertainment applications
In robotics, embedded processors have limited computational power, so it is often useful to perform some calculations remotely on a desktop computer or server.
This is especially true for audio signal processing; speech recognition, for example, often runs faster and more accurately on a remote processor, and most modern smartphones likewise process voice recognition remotely.
Alternatively, users may want to run their own signal-processing algorithms directly on the robot.
The NAOqi framework uses Simple Object Access Protocol (SOAP) to send and receive audio signals over the Web.
Sound is produced and recorded in NAO using the Advanced Linux Sound Architecture (ALSA) library.
The ALAudioDevice module manages audio inputs and outputs.
Using NAO’s audio capabilities, a wide range of experiments and research can take place in the fields of communications and human-robot interaction.
For example, users can employ NAO as a communication device, talking to it and hearing it respond as if it were a human being.
Signal processing is of course an interesting example. Thanks to the audio module, you can get the raw audio data from the microphones in real time and process it with your own code.
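A sketch of that real-time path, following the documented ALAudioDevice subscriber pattern: a local broker lets NAOqi push microphone buffers to a Python module's processRemote callback. The address, module name, and channel settings (front microphone at 16 kHz) are assumptions to adapt.

    import time
    from naoqi import ALBroker, ALModule, ALProxy

    NAO_IP, NAO_PORT = "nao.local", 9559   # assumed address

    class AudioTap(ALModule):
        # Receives raw microphone buffers pushed by ALAudioDevice.
        def __init__(self, name):
            ALModule.__init__(self, name)
            self.audio = ALProxy("ALAudioDevice")
            # Front microphone (channel flag 3) at 16 kHz, no deinterleaving.
            self.audio.setClientPreferences(self.getName(), 16000, 3, 0)
            self.audio.subscribe(self.getName())

        def processRemote(self, channels, samples_per_channel, timestamp, buf):
            # buf holds signed 16-bit samples; plug your own DSP in here.
            print("%d samples received" % samples_per_channel)

    # The broker gives this process a NAOqi endpoint for the callbacks;
    # NAOqi looks modules up by name, so the global variable must carry
    # the same name as the module.
    broker = ALBroker("audioTapBroker", "0.0.0.0", 0, NAO_IP, NAO_PORT)
    AudioTap_instance = AudioTap("AudioTap_instance")

    time.sleep(10)   # stream audio for ten seconds
    AudioTap_instance.audio.unsubscribe("AudioTap_instance")
    broker.shutdown()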