
AAAI 2019 Conference Report

Author: Annapureddy Ravinithesh Reddy, Data Scientist

■ About the Conference

The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019) was held from January 27th to February 1st, 2019, at the Hilton Hawaiian Village, Honolulu, Hawaii. The conference is known for high-quality research in Artificial Intelligence (AI). It featured various tracks, including but not limited to Machine Learning, Deep Learning, Game Theory, Natural Language Processing, Reinforcement Learning, Computer Vision, and Video Analytics.

Attendees gathered from around the world and included industry professionals, professors, senior researchers, PhD holders, and graduate and undergraduate students.

■ Learning Objectives

The purpose of the AAAI conference is to promote research in artificial intelligence (AI) and scientific exchange among AI researchers, practitioners, scientists, and engineers in affiliated disciplines. This is an international conference, with papers and speakers coming from around the globe. It was therefore a great opportunity for us to learn about the kind of research happening elsewhere and the types of problems researchers are trying to solve, and to consider adapting their solutions to our own problems.

We aim to learn about the following by attending the conference:

1. Computer Vision: 3D Object Reconstruction

2. 3D Shape Retrieval and Classification

3. Latest methods for processing large-scale data

4. Recent advances in the representation of 3D data

■ Workshop

・Games and Simulations for Artificial Intelligence

Recently, simulations have become a powerful tool for AI research. Many algorithms are evaluated by how well they perform on famous games such as chess and Go. From the other direction, we can use these simulations to generate data for training models. This workshop provided insights into the latest developments in this arena.

The workshop started with an introductory talk by Mr. Danny Lange, VP of AI and Machine Learning at Unity. According to him, the current systems called AI systems are not really AI systems but a kind of expert system: intelligence means the ability to acquire and apply knowledge and skills, whereas current systems do not acquire new knowledge but operate only on the knowledge they were trained on. He also believes it is very difficult to create intelligent systems given real-world constraints. For example, consider a driverless car deployed on a street that causes an accident leading to the injury or death of a person. The outcome is fatal and cannot be reversed. In a simulation environment, however, there is no physical loss, and we can create many more scenarios without needing them to occur in real life.

Beyond safety, game engines also scale from 30 FPS up to 3,000-6,000 FPS, with server performance of 10-20 million FPS per hour. These game engines include a physics engine, which makes it possible to simulate the real world. One also has to remember that there are challenges: building a faster and more precise physics engine, designing procedures for efficient data collection, achieving deterministic execution, and creating rich graphics and agent locomotion in simulations. By weighing the trade-offs between these factors, one can design a simulation environment and train machine/deep learning models on it.
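The idea of using a simulator as a cheap source of fully labelled training data can be illustrated with a toy example. The sketch below is not Unity or any engine mentioned in the talk; it is a minimal hypothetical 2-D projectile simulation whose stepped trajectories label each (speed, angle) input with an outcome, the way a real engine would at far larger scale.

```python
import math

def simulate_projectile(v0, angle_deg, dt=0.01, g=9.81):
    """Step a simple 2-D projectile until it returns to the ground
    and return its range (horizontal distance travelled)."""
    angle = math.radians(angle_deg)
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    x, y = 0.0, 0.0
    while True:
        x += vx * dt
        vy -= g * dt          # gravity acts on vertical velocity
        y += vy * dt
        if y <= 0.0:          # projectile has landed
            return x

def generate_dataset(speeds, angles):
    """Label every (speed, angle) pair with its simulated range:
    cheap, fully labelled data, exactly as many rows as we ask for."""
    return [((v, a), simulate_projectile(v, a))
            for v in speeds for a in angles]

data = generate_dataset(speeds=[10, 20, 30], angles=[30, 45, 60])
```

A supervised model trained on such pairs never needs a single real-world launch, which is the essence of the simulation-as-data-factory argument.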

After him, Mr. Yuandong Tian from Facebook AI Research spoke on “Building Scalable Framework and Environment for Reinforcement Learning”. The talk started by defining Reinforcement Learning (RL) in simple terms. Currently, games serve as a test bed for RL; we have seen many examples of this, such as RL agents playing Chess, Go, StarCraft II, and Dota 2. The pros of using games as a test bed are as follows: they provide an infinite supply of fully labelled data, they are controllable and replicable, they have a low cost per sample, they run faster than real time, they raise fewer safety and ethical concerns, and they exhibit complicated dynamics arising from simple rules. On the other hand, the cons are that they require a good simulator and a lot of data and resources, that results must be transferred from simulation to the real world, and that a usable application is not always obvious. To explore real-world applications beyond games, RL has been used for optimization, specifically for the travelling salesman problem, job scheduling, vehicle routing, bin packing, protein folding, and model search. He presented a few examples where RL was used in real applications, such as online job scheduling and expression simplification. In the case of online job scheduling, their NeuRewriter algorithm achieved the lowest average slowdown among various popular scheduling algorithms, and this held even as the number of resources increased, unlike the others. An important observation is that their model was able to generalize to different job distributions.
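To make the "RL in simple terms" definition concrete, here is a minimal sketch of tabular Q-learning on a toy corridor environment. This example is mine, not from the talk: the corridor, reward, and hyperparameters are all illustrative. An agent starts at the left end, earns a reward only for reaching the right end, and learns action values purely by trial and error.

```python
import random

def train_q_learning(n_states=5, n_episodes=500,
                     alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a 1-D corridor: start in state 0,
    reward +1 for reaching the rightmost state.
    Actions: 0 = move left, 1 = move right."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(n_episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: explore occasionally, otherwise act greedily
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # standard Q-learning update toward r + gamma * max_a' Q(s', a')
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_q_learning()
# greedy policy for the interior states after training
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(4)]
```

After training, the greedy policy moves right in every state, since the only reward lies at the right end; the same loop structure scales up (with function approximation) to the game-playing agents mentioned above.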

The usefulness of the above talk to our products is as follows: Reinforcement Learning has been shown to produce better results than traditional learning methods, and as mentioned in the talk, we can apply it to two similar problems we are trying to solve, bin packing and the Travelling Salesman Problem, for optimizing order picking.

One of the important problems that we, like many others, want to solve is training a robot arm to perform operations like a human. By "like a human" we mean taking care of factors such as the amount of pressure to apply, the angle at which an object is picked, and the position where it has to be placed. This requires large amounts of training data, which is difficult to generate in the real world. As mentioned above, resorting to simulations is a better choice. The simulators currently available are good in their own worlds, but they fail when transferring results to the real world because their engines are not designed with physical-world constraints. The presentation by Mr. Shital Shah from Microsoft Research gave more insight into this issue and its solution. To address it, the Microsoft Research team designed AirSim, a simulator that presents a realistic world. Its highlights include sensors such as GPS, barometer, magnetometer, and IMU; a rendering engine with options for multiple vehicles, setting the time of day, and plugins for adding more environments; Python and C++ APIs; and, from a computer vision perspective, depth information, segmentation, configurable camera settings, and noise simulation, to mention a few. The project is publicly available.

We can take advantage of this simulation software to create data for training the robot arm. It is worth noting that AirSim has 13 environments, including a warehouse along with the equipment used to operate it. This warehouse environment was created as a replica of one of Toyota's warehouses.

Photo (Karu on right side)
