Since Czech playwright Karel Čapek popularised and, indeed, named the concept of the robot in his 1920 science-fiction play, RUR (Rossum’s Universal Robots) – the word is derived from the Czech word ‘robota’, which means labour – it has exerted a fascination on popular, scientific, engineering and technological minds alike. The robot quickly became a mainstay of science fiction, sometimes benign (as with Robby the Robot in the film Forbidden Planet or R2-D2 and C-3PO from the Star Wars series), sometimes hostile.
The first real working robot, however, bore no resemblance to the humanoid robots beloved of science fiction. This was Unimate, developed by the Unimation (Universal Automation) company in the US, which was specifically founded to manufacture robots for industry. Unimate, which entered service on a General Motors assembly line in 1961, was the forerunner of all today’s industrial robots and, being in the form of a large mechanical arm, also set the format most such robots still follow. (Unimation was later bought by Westinghouse and subsequently sold to the Franco-Swiss Stäubli group, which incorporated it into its robotics division; the name Unimation is no longer used.)
However, while industrial robots are real, they are not, in most people’s minds, real robots. A new area of robotics has since developed, however, known as field robotics (the word robotics was coined by US science-fiction writer Dr Isaac Asimov in 1942).
“In science fiction, robots have true artificial intelligence; an unlimited working environment, like humans; can learn [are not preprogrammed] and can work cooperatively alongside humans. In reality today, the traditional ‘home’ of robotics is a production line, with repetitive work, a fully controlled environment and no interaction with humans, because it is not safe. Such robots are unaware of what is around them. They just do their tasks. Field robotics is relatively new. Field robots are in between industrial robots and science fiction. They operate in unstructured environments but in limited roles. They can act autonomously. Some can operate with people safely. Some have limited learning ability,” explains Council for Scientific and Industrial Research (CSIR) mining robotics project manager Liam Candy.
Rapid progress has been made in field robotics technology over the past decade, given a huge impetus by the demands of war. For example, the US Army will, in the last quarter of this year, deploy to Afghanistan, for operational testing, four Lockheed Martin Squad Mission Support Systems (SMSSes).
The SMSS is a six-wheel all-terrain robot logistics support vehicle, which can carry a payload of about 0.5 t over a maximum range of some 200 km. It can operate using supervised autonomy, or be controlled by voice, teleoperation, or driven manually. With supervised autonomy, it can follow preplanned routes or travel to way points. It can also follow behind humans. By the end of last year, the SMSS had gained four US government safety releases, allowing it to operate with humans. By the end of this year, this should have increased to six such safety releases.
Lockheed Martin is also looking at developing intelligence, surveillance, target acquisition and reconnaissance and armed versions of the SMSS. It is also being evaluated for missions such as firefighting, emergency medical services and emergency power generation.
This is just one, albeit an advanced, example of what is happening. The US and other militaries are also interested in autonomous trucks for use in rear-area road-bound logistics supply convoys, for example.
US technology market research company ABI Research has calculated that $5.8-billion was spent on military robotics worldwide last year and that this figure will increase by 70% to more than $8-billion in 2016. (These figures include unmanned air vehicles, unmanned ground vehicles and unmanned underwater vehicles, alternatively phrased robot aircraft, robot vehicles and robot submarines.) In addition, there is less but still significant civilian investment in field robotics, for cargo handling, mining and agricultural applications, among others.
South Africa cannot remotely match the scale of these investments – military or civil. Yet, the country cannot ignore field robotics either. It is becoming too important, too dynamic – a field relevant to many areas of human endeavour.
Consequently, the CSIR is undertaking research into field robotics through the formation of the Mobile Intelligent Autonomous Systems (Mias) group. “This is an emerging research area for the CSIR, so it is different from a competence area,” explains Mias group leader Dr Simukai Utete. “Our group targets niche areas which address national needs – niche areas which are of relevance to [our] society. We are very concerned about capacity development in robotics.”
Mias has four research areas – perception, planning, navigation and machine learning. The group’s projects are carried out in the areas of mining robotics, ‘mule’ robotics (the SMSS is an example of a mule system), intelligent manipulation and active vision for autonomous systems.
“We develop the autonomy (intelligence),” she says. “We often buy in the platforms. We are very much about codevelopment with other research and industrial partners. Our research is not focused on sensor design. We take data from different sensors and integrate it using sensor fusion for different applications.” A lot of the sensors employed (bought in) by the CSIR Mias team are very expensive, but it is expected that, as the global demand for them increases, the costs of the sensor packages will fall.
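The sensor fusion Utete describes can be illustrated with its simplest building block: combining two independent, noisy measurements of the same quantity by weighting each with the inverse of its variance. This is a generic textbook sketch, not the CSIR's implementation; the sensor values and variances are invented for illustration.

```python
# Illustrative sketch: fusing two noisy measurements of the same
# quantity by inverse-variance weighting, a basic element of sensor fusion.

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Combine two independent estimates of the same quantity.

    Each measurement is weighted by the inverse of its variance, so the
    more reliable sensor dominates the fused estimate.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# A laser reading (precise) and a GPS-derived reading (noisy) of the
# same distance: the fused value lies much closer to the laser's,
# and its variance is lower than either input's.
print(fuse(10.2, 0.01, 11.0, 0.25))
```

The same weighting idea, applied recursively over time, is the core of the Kalman filters commonly used in robot state estimation.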
The nickname ‘mule’ for robot vehicles like the SMSS derives from both the term ‘multipurpose logistics equipment’ and from the fact that they are intended as twenty-first century counterparts to the pack mules of the past that accompanied and supported mining prospectors, fur trappers, soldiers and explorers.
“An autonomous system is one which can perform certain desired tasks and operate independently – for example, a Mars rover or robot vacuum cleaners,” elucidates CSIR autonomous mule project manager Deon Sabatta. “But it is difficult to define generic solutions for autonomous systems because of the unstructured environments they face. There is also the question of their robustness and the issue of the social implications.” Social implications include the impact of autonomous systems on jobs, which is an extremely sensitive issue in South Africa.
Sabatta uses developments in the global motor industry to trace the introduction and development of autonomy in motor cars. In 1957, cruise control technology was developed; in 1964, automatic headlights and climate control; in 1966, antilock brakes; in 1984, automatic windscreen wipers; in 1995, stability control; in 1997, adaptive cruise control; in 2004, the self-parking car; and in 2009, lane-keeping assistance. Each of these developments reduced the effort needed to drive a car safely and reduced the number of decisions a motorist needed to take. Further, from 2007, prototypes of fully autonomous vehicles began to appear.
“Robustness is starting to improve,” he reports. “Fully autonomous [civilian] vehicles are likely to become widespread in the 2030 to 2050 period. The current focus is on partial autonomy – systems that can perform desired tasks and operate independently some of the time or under supervision.”
In the South African context, the CSIR is looking at applying such partially autonomous systems to repetitive jobs, high-precision jobs, undesirable jobs and to improve efficiency.
So far, the CSIR has effectively concluded its first Mias robot vehicle project, designated the Autonomous Rover. This was a vehicle which could navigate by Global Positioning System (GPS) along a predefined route and carry out point-to-point route planning. Such a vehicle would have practical applications in certain niches, such as in agriculture, for spraying and harvesting, or for ore transport in mining, or in allowing the creation of truck convoys which would need only a single driver.
The CSIR team is now developing its next Mias vehicle project, the Autonomous Mule, which will be more ambitious and complex. It will have to be able to follow selected people or vehicles, drive itself back along routes it has already traversed and navigate itself to locations it has not previously visited. This requires the development of a series of technologies and capabilities, on which the CSIR Mias team is currently working.
“A robot needs to be aware of its surroundings to behave appropriately,” highlights Sabatta. The robot must be able to model the environment around it, and the CSIR is developing both two- and three-dimensional environmental modelling systems for robots.
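The simplest two-dimensional environmental model of the kind mentioned is an occupancy grid: the space around the robot is divided into cells, and each cell records whether a range sensor has seen an obstacle there. The sketch below is a deliberately minimal, hypothetical version (single scan, binary cells, no noise handling), not the CSIR's modelling system.

```python
# Minimal sketch of a 2-D occupancy-grid environment model: project
# each laser-beam endpoint into the world frame and mark that cell.
import math

def build_grid(scan, pose, size=20, resolution=1.0):
    """scan: list of (angle_rad, range_m) beams; pose: (x, y, heading_rad).

    Returns a size x size grid where 1 marks a cell containing an
    obstacle detected by the scan.
    """
    grid = [[0] * size for _ in range(size)]
    x0, y0, heading = pose
    for angle, rng in scan:
        # Endpoint of the beam in world coordinates.
        ox = x0 + rng * math.cos(heading + angle)
        oy = y0 + rng * math.sin(heading + angle)
        i, j = int(ox / resolution), int(oy / resolution)
        if 0 <= i < size and 0 <= j < size:
            grid[j][i] = 1  # mark the cell as occupied
    return grid

# A single beam straight ahead, 5 m long, marks one cell 5 m in front
# of a robot sitting at the origin facing along the x axis.
grid = build_grid([(0.0, 5.0)], (0.0, 0.0, 0.0))
```

Real mapping systems accumulate many scans and keep a probability per cell rather than a hard 0/1, but the data structure is the same.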
The robot must also know where it is – ‘localisation’. “We are looking at GPS-based localisation,” he states. “But GPS is not always accurate. Our focus is on improving the performance of GPS-based systems.” For a mule system that would operate in a geographically restricted area – a factory site, say – the CSIR team is also looking at vision-based localisation and mapping. Basically, they go around and take photographs of the area concerned to create a database of images for the robot. The robot would then compare what it sees with this database and so determine where it is.
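The vision-based localisation Sabatta describes reduces, at its simplest, to nearest-neighbour matching: summarise each reference photograph as a compact descriptor tagged with the position where it was taken, then report the position of the database entry closest to the current view. The sketch below uses a crude intensity histogram as the descriptor purely for brevity; real systems use robust image features, and all names and data here are hypothetical.

```python
# Illustrative sketch of appearance-based localisation: compare the
# robot's current view against a database of images taken at known
# positions and return the position of the best match.

def histogram(pixels, bins=4):
    """Reduce an image (a flat list of 0-255 intensities) to a
    normalised intensity histogram - a stand-in for a real descriptor."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [count / total for count in h]

def localise(current, database):
    """database: list of (position, descriptor) pairs recorded during
    the survey drive. Returns the position of the closest descriptor."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda entry: dist(current, entry[1]))[0]

# Survey database: two places, two reference descriptors.
db = [("loading bay", [0.5, 0.0, 0.0, 0.5]),
      ("corridor", [0.0, 1.0, 0.0, 0.0])]
print(localise(histogram([0, 0, 255, 255]), db))
```

The practical appeal for a factory-sized site is that the database is built once, during a single survey pass, and needs no GPS signal indoors.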
An autonomous vehicle also needs to be able to identify and track “targets” – which might be animate (particularly human) or inanimate – which it might either follow or avoid, as required. It must also know where it needs to go and be able to plan the route it will follow to reach that goal. It is also necessary to ensure that, while this is happening, everything goes according to plan.
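The route-planning step can be sketched with the most basic graph search over a grid map: expand outwards from the start cell until the goal is reached, avoiding obstacle cells. This is a generic breadth-first-search illustration, not the CSIR's planner; fielded systems typically use A* or similar informed search over much richer maps.

```python
# Hypothetical sketch of goal-directed route planning: breadth-first
# search across a grid map, where 0 is free space and 1 is an obstacle.
from collections import deque

def plan(grid, start, goal):
    """Returns a list of (x, y) cells from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append(path + [(nx, ny)])
    return None  # no route exists

# A wall across the middle forces the route around the right-hand side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan(grid, (0, 0), (0, 2)))
```

Monitoring that "everything goes according to plan" then amounts to replanning whenever the sensed map diverges from the one the route was computed on.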
There is one more critical issue. “An autonomous system isn’t much good if it can’t communicate with its user,” points out Sabatta. “We have a focus on creating intuitive and easy-to-understand user interfaces and visualisations.”
Sabatta expects the Autonomous Mule project to run for one to two years, to be followed by a commercialisation phase and the development of specific vehicles for specific tasks.
SEEING AND HOLDING
As has already been pointed out, a robot must be able to perceive, in some way, its environment if it is to function safely (industrial robots cannot perceive their environment, which is why people cannot work alongside them). Consequently, the Mias programme has a team dedicated to this issue.
“Vision is one of the important things in robotics, which cuts across many projects,” says vision systems senior researcher Fred Senekal.
“Intelligent agents show the ability to learn, but they need to be able to perceive their environments. Our main research in terms of perception is computer vision. We need to create models or descriptions of the environment from imagery captured by sensors on the robot that will allow it to navigate around that environment and execute useful functions in that environment. Our current research problems are to create three-dimensional and two-dimensional models of the robot’s environment; enable the robot to recognise objects in its environment that are of interest to it for interaction, navigation and decision-making; and allow the robot to track objects in its environment. We deal more with the soft elements of robotics, not the hard elements – the mechatronics.” Lacking the resources to do research into all possible forms of perception, the team has focused its attention on using cameras and laser scanners, with which they have gained “quite a bit of experience”.
Effective vision involves the robot being able to recognise objects and images and track objects based either on features possessed by the object (including patterns or words on its surfaces) or the shape of the object (which often varies, depending on perspective). Further, the team is researching stereo correspondence – that is, using two or more cameras to achieve triangulation and so determine the depth of an object in the field of vision. This is complemented by work on using laser scanners to ‘map’ the environment around it.
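Once a feature has been matched in both camera images, the triangulation behind stereo correspondence reduces to a one-line formula: depth equals focal length times the baseline between the cameras, divided by the disparity (the pixel shift of the feature between the two images). The parameter values below are illustrative, not taken from the CSIR's cameras.

```python
# Depth from stereo disparity: the triangulation step that follows
# successful stereo correspondence.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Distance to a point seen by two horizontally offset cameras."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# A feature shifted 20 pixels between cameras 0.1 m apart, imaged with
# a 700-pixel focal length, lies 3.5 m away.
print(depth_from_disparity(700, 0.1, 20))
```

Note how the formula also explains why stereo depth degrades with distance: far-away points produce tiny disparities, so a one-pixel matching error causes a large depth error.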
Ideally, a robot needs to be able to identify an object when the object is moving and changing direction as well as when it is stationary. Further, a robot vehicle should be able to recognise road signs, perform road segmentation (identify and negotiate intersections, Y-junctions and so on) and, indeed, recognise traffic lanes. All this requires that the robot understands what it sees – a process known as scene understanding. In this area, Mias is working with the CSIR Meraka Institute, which is focused on information and communications technology.
It would also be good if a field robot, once it had reached a desired location, could then do something useful, such as grab, hold, lift and manipulate objects. Now, of course, industrial robots do this all the time, but field robots would have to do these things with unfamiliar objects, in close proximity to humans, and so do them in a safe manner – hence, another element in the Mias research – the intelligent manipulator project.
This project was stimulated by a European programme, the SME Robot. This is actually aimed at small and medium-size enterprises (hence, SME) which cannot afford dedicated robot-only workspaces, and so need robots that can safely operate alongside humans, and which can also understand human-style instructions, such as voice, gesture and graphics.
The consortium developing the SME Robot has some 25 members, including the Fraunhofer Institute, the Gesellschaft für Produktionssysteme, ABB, Kuka, Comau, Reis Robotics and Güdel.
“Our project is basically looking at [the SME Robot] idea but to go quite a bit further in terms of what we can make,” reports intelligent manipulator project manager Jonathan Claassens. “How do we make a manipulator intelligent? We need planning – the robot needs to be able to adapt to its environment – computer vision and the ability to interface with humans.”
As with the SME Robot, the intelligent manipulator can be applied to many roles, and not only as an attachment to a field robot. Vision is obtained by fitting the robot with a time-of-flight camera, which provides three-dimensional surface information of an object in real time. “Our focus on computer vision is robustness,” he states.
Areas being researched include active vision – the robot views an object from several different angles in order to accurately identify it; pose estimation – correctly determining the stance of a human working with or close to the robot; and imitation planning – that is, the robot is shown what to do and then imitates what has been demonstrated to it. (The showing part currently involves physically holding the robot arm and then moving it through the desired motions more than once.)
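The repeated-demonstration element of imitation planning can be sketched very simply: sample the arm's joint angles at a fixed rate while a person guides it, then average the corresponding samples across several demonstrations to get a single trajectory to replay. This is a minimal, hypothetical illustration; a real controller would also time-align the demonstrations, interpolate between waypoints and enforce joint limits.

```python
# Minimal sketch of learning from repeated kinesthetic demonstrations:
# average the i-th joint-angle sample across all guided runs.

def average_demonstrations(demos):
    """demos: list of demonstrations, each a list of joint-angle tuples
    sampled at the same rate. Returns the averaged trajectory."""
    steps = zip(*demos)  # group the i-th sample of every demonstration
    return [tuple(sum(axis) / len(axis) for axis in zip(*step))
            for step in steps]

# Two guided runs of a two-joint arm, three samples each.
demo1 = [(0.0, 0.0), (0.2, 0.1), (0.4, 0.3)]
demo2 = [(0.0, 0.0), (0.4, 0.1), (0.6, 0.3)]
print(average_demonstrations([demo1, demo2]))
# each waypoint is the mean of the two guided runs
```

Averaging over several runs smooths out the small variations a human guide inevitably introduces, which is why the arm is moved through the motions more than once.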
“Interface is a new thrust,” states Claassens. “Several packages are in development for visualisation and plan editing. But none are at a point where you could put the robot in front of a worker.”