Namaste, I'm Aadesh
I'm a passionate Machine Learning Engineer with extensive experience in designing scalable Robotics Vision, Computer Vision, and NLP systems. I am looking for a full-time engineering role in the autonomous vehicle industry.
MACHINE LEARNING ENGINEER WHO WANTS TO DESIGN RELIABLE AND SAFE AUTONOMOUS VEHICLES.
Experienced in building massive computer vision and NLP datasets using web scraping and crowdsourcing tools.
Proficient in performing visual analysis of data using Pandas and Matplotlib.
Seasoned in ML models ranging from classical pattern recognition algorithms to state-of-the-art deep learning models.
Trained in the MLOps lifecycle, with a particular focus on model deployment, tracking, and monitoring.
Experienced with constrained optimization problems in robotics using various algorithms such as ABC, PSO, A*, RRT*, and others.
Expert in genetic algorithms, evolution strategies, differential evolution and estimation of distribution algorithms.
Proficient in robotics controllers (state machines, neural networks, graphs, Behavior Trees, and others).
Authored peer-reviewed papers on designing resilient swarms.
Wrote controllers for various robotic platforms such as Sphero, Cozmo, Baxter, TurtleBot, and others.
Graduated from self-driving car specializations (Coursera and Udacity).
Experienced in various control strategies such as model predictive control, PID, geometric control, imitation-learning-based control, and RL-based control (a minimal PID sketch follows this list).
Proficient in sensor fusion, aggregating data from various sensors such as camera, LIDAR, RADAR, SONAR, IMU, GNSS, and others.
Familiar with various simultaneous localization and mapping (SLAM) strategies for state estimation.
Seasoned in safe and reliable behavior planning that integrates linear temporal logic.
Participated in the research of goal formulation and verification in AI agents.
Participated in the research of evolution of swarm behaviors using grammatical evolution.
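As a small illustration of the control-strategies item above, here is a minimal PID controller sketch in Python. The gains, timestep, and toy plant model are illustrative assumptions, not values taken from any of the projects below.

```python
class PID:
    """Minimal discrete PID controller (illustrative gains only)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy usage: drive a crude first-order plant toward a target speed of 10 m/s.
pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.1)
speed = 0.0
for _ in range(100):
    throttle = pid.step(setpoint=10.0, measurement=speed)
    speed += 0.1 * throttle  # simplistic plant model, for illustration only
```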
Task-based chatbots tend to suffer from either overconfidence or ignorance: giving a response that is confidently wrong or completely uncertain (e.g., “I don’t know”). A chatbot that could identify the source of its uncertainty and ask a clarifying question would lessen the burden of query reformulation for the user. We introduce a two-turn query-response-query (QRQ) task, in which a user queries the chatbot and the chatbot must ask a clarifying question, which results in a second user query that clarifies the user’s intent. We evaluate performance in two ways: 1) by the perplexity of the response on the Taskmaster-2 dataset, and 2) by the information acquired between the first user query and the second user query, as measured by an intent classifier. We train a variety of architectures for these objectives, including a supervised encoder-decoder transformer and an unsupervised system trained to acquire more information from the second query than the first. Although the unsupervised system does not currently improve on the baseline, there are positive indications that a similar approach could yield positive results in the future.
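As a rough illustration of the information-acquisition metric described above, the sketch below scores how much an intent classifier's uncertainty drops between the first and the second user query. The `classify_intent` function is a hypothetical stand-in for whatever intent classifier is actually used; only the entropy-reduction bookkeeping is shown.

```python
import numpy as np


def entropy(probs):
    """Shannon entropy (nats) of a probability distribution."""
    probs = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())


def information_acquisition(query1, query2, classify_intent):
    """Entropy reduction of the intent distribution after the clarified query.

    `classify_intent(text) -> probs` is assumed to return a distribution over
    task intents; a larger positive value means the chatbot's clarifying
    question elicited a more informative second query.
    """
    h1 = entropy(classify_intent(query1))
    h2 = entropy(classify_intent(query1 + " " + query2))
    return h1 - h2
```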
To build a safe and reliable self-driving car, it is crucial not only to understand specialized subsystems of an autonomous vehicle such as lane detection, vSLAM, and traffic light detection, but also to understand how those subsystems interact with each other. So for our CS704R project, we first implement those specialized subsystems independently. Second, we combine those modules to build a minimal self-driving car agent in ROS. Finally, we test our agent in a simulated highway environment. Our minimal agent was successful in driving on the highway track. For this project, we modified the Udacity self-driving car simulator to work with the latest ROS (Noetic) and Python 3.
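A minimal sketch of how one such module could be wired together as a ROS (Noetic) node is shown below. The topic names and the trivial proportional steering rule are hypothetical placeholders, not the actual CS704R implementation.

```python
#!/usr/bin/env python3
# Minimal ROS Noetic node sketch: subscribe to a (hypothetical) lane-offset
# topic and publish a proportional steering command.
import rospy
from std_msgs.msg import Float64


class LaneKeeper:
    def __init__(self):
        self.pub = rospy.Publisher("/vehicle/steering_cmd", Float64, queue_size=1)
        rospy.Subscriber("/perception/lane_offset", Float64, self.on_offset)

    def on_offset(self, msg):
        # Steer proportionally against the lateral offset from lane center.
        self.pub.publish(Float64(-0.5 * msg.data))


if __name__ == "__main__":
    rospy.init_node("lane_keeper")
    LaneKeeper()
    rospy.spin()
```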
The PID controller is a popular algorithm in control theory, but it is difficult for a PID controller to overcome the latency that is prevalent in real-world systems. We describe an MPC that adapts well in the presence of latency in the system. MPC is an advanced method of process control that is used to control a system while satisfying a set of constraints. MPC has been successfully applied to vehicles described by bicycle models and is a popular choice for self-driving cars. First, a brief description of the stability, controllability, and observability properties of the system is provided, followed by the system model, constraints, and cost function. We implement the MPC controller on a car simulator and evaluate its performance. This project was part of the ECEN 773 class.
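A heavily simplified sketch of the kind of cost such an MPC minimizes over a short horizon is shown below, using a kinematic bicycle model and a straight reference path along y = 0. The weights, horizon, wheelbase, and bounds are illustrative assumptions, not the values used in the ECEN 773 project.

```python
import numpy as np
from scipy.optimize import minimize

DT, N, LF = 0.1, 10, 2.67   # timestep, horizon length, wheelbase (illustrative)
REF_V = 15.0                # reference speed in m/s


def rollout_cost(u, state):
    """Cost of applying controls u = [delta_0, a_0, ..., delta_{N-1}, a_{N-1}]."""
    x, y, psi, v = state
    cost = 0.0
    for i in range(N):
        delta, a = u[2 * i], u[2 * i + 1]
        # Kinematic bicycle model update.
        x += v * np.cos(psi) * DT
        y += v * np.sin(psi) * DT
        psi += v / LF * delta * DT
        v += a * DT
        # Track y = 0, heading 0, and the reference speed; keep actuation small.
        cost += 10.0 * y**2 + 10.0 * psi**2 + (v - REF_V)**2
        cost += 0.1 * delta**2 + 0.1 * a**2
    return cost


state0 = np.array([0.0, 1.0, 0.05, 10.0])     # x, y, psi, v
u0 = np.zeros(2 * N)
bounds = [(-0.44, 0.44), (-1.0, 1.0)] * N     # steering (rad), acceleration
sol = minimize(rollout_cost, u0, args=(state0,), bounds=bounds)
delta_cmd, a_cmd = sol.x[0], sol.x[1]         # apply only the first controls
```

One common way to account for actuator latency in this setting is to propagate the current state forward by the latency duration before solving; that step is omitted here for brevity, and whether the project uses exactly that trick is not stated in the abstract.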
Cryptography and data science research grew exponentially with the internet boom. Legacy encryption techniques force users to make a trade-off between usability, convenience, and security. Encryption makes valuable data inaccessible, as it needs to be decrypted each time to perform any operation. Billions of dollars could be saved, and millions of people could benefit from cryptographic methods that do not compromise between usability, convenience, and security. Homomorphic encryption is one such paradigm that allows running arbitrary operations on encrypted data. It enables us to run any sophisticated machine learning algorithm without access to the underlying raw data. Thus, homomorphic learning provides the ability to gain insights from sensitive data that has been neglected due to various governmental and organizational privacy rules. In this paper, we trace back the ideas of homomorphic learning formally posed by Ronald L. Rivest and Len Adleman as “Can we compute upon encrypted data?” in their 1978 paper. Then we gradually follow the ideas sprouting in the brilliant minds of Shafi Goldwasser, Kristin Lauter, Dan Boneh, Tomas Sander, Donald Beaver, and Craig Gentry to address that vital question. It took more than 30 years of collective effort to finally find the answer “yes” to that important question.
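As a tiny, purely illustrative example of the kind of property the paper traces, textbook (unpadded) RSA is multiplicatively homomorphic: multiplying two ciphertexts yields an encryption of the product of the plaintexts. The toy key below is far too small to be secure and is only meant to show the algebra (Python 3.8+ for the modular inverse via pow).

```python
# Toy demonstration of multiplicative homomorphism in textbook RSA.
# WARNING: tiny key and no padding -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse of e)


def encrypt(m):
    return pow(m, e, n)


def decrypt(c):
    return pow(c, d, n)


m1, m2 = 7, 6
c_product = (encrypt(m1) * encrypt(m2)) % n   # multiply ciphertexts...
assert decrypt(c_product) == (m1 * m2) % n    # ...decrypts to the product
```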
Animals such as bees, ants, birds, fish, and others are able to efficiently perform complex coordinated tasks like foraging, nest selection, flocking, and escaping predators without centralized control or coordination. These complex collective behaviors are the result of emergence. Conventionally, mimicking these collective behaviors with robots requires researchers to study actual behaviors, derive mathematical models, and implement these models as algorithms. Since the conventional approach is very time consuming and cumbersome, this thesis uses an emergence-based method for the efficient evolution of collective behaviors.

Our method, Grammatical Evolution algorithm for Evolution of Swarm bEhaviors (GEESE), is based on Grammatical Evolution (GE) and extends the literature on using genetic methods to generate collective behaviors for robot swarms. GEESE uses GE to evolve a primitive set of human-provided rules, represented in a BNF grammar, into productive individual behaviors represented by a Behavior Tree (BT). We show that GEESE is generic enough, given an initial grammar, that it can be applied to evolve collective behaviors for multiple problems with just a minor change in objective function.

Our method is validated as follows: First, GEESE is compared with state-of-the-art genetic algorithms on the canonical Santa Fe Trail problem. Results show that GEESE outperforms the state-of-the-art by a) providing better solutions given sufficient population size while b) utilizing fewer evolutionary steps. Second, GEESE is used to evolve collective swarm behavior for a foraging task. Results show that the evolved foraging behavior using GEESE outperformed both hand-coded solutions as well as solutions generated by conventional Grammatical Evolution. Third, the behaviors evolved for the single-source foraging task were able to perform well in a multiple-source foraging task, indicating a type of robustness. Finally, with a minor change to the objective function, the same BNF grammar used for foraging can be shown to evolve solutions to the nest-maintenance and cooperative transport tasks.
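A minimal sketch of the genotype-to-phenotype mapping at the heart of grammatical evolution is shown below. The toy grammar and codon handling are simplified assumptions and far smaller than the BNF grammar GEESE actually uses to build Behavior Trees.

```python
# Toy grammatical-evolution mapping: a list of integer codons selects
# production rules from a tiny BNF-style grammar to build a behavior string.
GRAMMAR = {
    "<behavior>": [["<action>"], ["<action>", "->", "<behavior>"]],
    "<action>": [["Explore"], ["CarryToNest"], ["DropAtNest"]],
}


def map_genome(genome, symbol="<behavior>", max_wraps=3):
    codons, out, stack, used = list(genome), [], [symbol], 0
    while stack and used < max_wraps * len(codons):
        sym = stack.pop(0)
        if sym in GRAMMAR:
            rules = GRAMMAR[sym]
            choice = rules[codons[used % len(codons)] % len(rules)]
            used += 1
            stack = list(choice) + stack   # expand the chosen production
        else:
            out.append(sym)                # terminal symbol, keep it
    return " ".join(out)


print(map_genome([3, 7, 2, 5, 1]))   # prints "CarryToNest -> DropAtNest"
```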
Current computer vision algorithms using neural nets with a softmax function can only classify objects among the labels used for training. If we provide the algorithm with an entirely different image, it will still try to classify the image into one of the labels it knows. It would be valuable if these algorithms could distinguish between images that are similar to what they have seen before and images that are completely different. This problem is known as the Open Set recognition problem. Addressing it would bring tremendous benefits to computer vision, as machines would be able to classify objects more accurately and be more robust to fooling images as well as adversarial images. We implement the OSDN algorithm, which uses Weibull fitting on the penultimate layer of the neural net, to address the Open Set recognition problem.
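A rough sketch of the Weibull-based rejection idea is shown below: fit a Weibull distribution to the largest distances between training activations and their class mean, then flag test samples whose distance falls in the extreme tail as "unknown". The tail size, threshold, and the direct use of `scipy.stats.weibull_min` are illustrative simplifications of the full OSDN/OpenMax recalibration.

```python
import numpy as np
from scipy.stats import weibull_min


def fit_tail_model(class_activations):
    """Fit a Weibull to distances from the class mean activation vector (MAV)."""
    mav = class_activations.mean(axis=0)
    dists = np.linalg.norm(class_activations - mav, axis=1)
    tail = np.sort(dists)[-20:]                   # largest distances only
    shape, loc, scale = weibull_min.fit(tail, floc=0)
    return mav, (shape, loc, scale)


def is_unknown(activation, mav, params, threshold=0.95):
    """Reject as 'unknown' if the distance lies in the extreme Weibull tail."""
    shape, loc, scale = params
    dist = np.linalg.norm(activation - mav)
    return weibull_min.cdf(dist, shape, loc=loc, scale=scale) > threshold


# Toy usage with random penultimate-layer activations for one known class.
rng = np.random.default_rng(0)
known = rng.normal(0.0, 1.0, size=(200, 32))
mav, params = fit_tail_model(known)
print(is_unknown(rng.normal(0.0, 1.0, size=32), mav, params))   # likely False
print(is_unknown(rng.normal(5.0, 1.0, size=32), mav, params))   # likely True
```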
Social animals can effectively and cooperatively transport objects many times bigger than themselves. Mimicking those behaviors on real robots would have diverse applications in engineering, health care, and search and rescue. In this paper, we define different categories of cooperative transport problems and discuss different tools and techniques to tackle them. We then show that occlusion-based cooperative transport techniques are effective when the object is convex and there are enough agents to overcome the frictional force. Results show that even with only two robots, the occlusion-based technique is able to transport objects 60% of the time.
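The core of the occlusion-based strategy is a simple perceptual test: a robot pushes the object only while the object blocks its line of sight to the goal. A minimal 2D sketch of that test, with a circular object as a stand-in geometry, is shown below; it is an illustrative assumption, not the exact sensing used in the paper.

```python
import numpy as np


def goal_occluded(robot, goal, obj_center, obj_radius):
    """True if the segment robot->goal passes through a circular object (2D)."""
    robot, goal, obj_center = map(np.asarray, (robot, goal, obj_center))
    d = goal - robot
    t = np.clip(np.dot(obj_center - robot, d) / np.dot(d, d), 0.0, 1.0)
    closest = robot + t * d                  # closest point on the segment
    return np.linalg.norm(obj_center - closest) <= obj_radius


# A robot behind the object (relative to the goal) keeps pushing; a robot that
# can see the goal directly moves around the object instead.
print(goal_occluded(robot=(0, 0), goal=(10, 0), obj_center=(5, 0), obj_radius=1))  # True
print(goal_occluded(robot=(0, 5), goal=(10, 0), obj_center=(5, 0), obj_radius=1))  # False
```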
Creating catchy slogans is a demanding and clearly creative job for ad agencies. The process of slogan creation by humans involves finding key concepts of the company and its products and developing a memorable short phrase to describe the key concept. We attempt to follow the same sequence, but with an evolutionary algorithm. A user inputs a paragraph describing the company or product to be promoted. The system randomly samples initial slogans from a corpus of existing slogans. The initial slogans are then iteratively mutated and improved using an evolutionary algorithm. Mutation randomly replaces words in an individual with words from the input paragraphs. Internal evaluation measures a combination of grammatical correctness and semantic similarity to the input paragraphs. Subjective analysis of output slogans leads to the conclusion that the algorithm certainly outputs valuable slogans. External evaluation found that the slogans were somewhat successful in conveying a message, because humans were generally able to select the correct promoted item given a slogan.
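A stripped-down sketch of that evolutionary loop is shown below: slogans are seeded from a small corpus, mutated by swapping in words from the input description, and ranked by a toy fitness. The word-overlap fitness and the tiny corpus are hypothetical stand-ins for the grammaticality and semantic-similarity measures used in the actual system.

```python
import random

CORPUS = ["just do it", "think different", "taste the feeling"]


def mutate(slogan, input_words, rng):
    """Replace one random word in the slogan with a word from the input text."""
    words = slogan.split()
    words[rng.randrange(len(words))] = rng.choice(input_words)
    return " ".join(words)


def fitness(slogan, input_words):
    """Toy score: word overlap with the input description."""
    return len(set(slogan.split()) & set(input_words))


def evolve(description, generations=50, pop_size=10, seed=0):
    rng = random.Random(seed)
    input_words = description.lower().split()
    population = [rng.choice(CORPUS) for _ in range(pop_size)]
    for _ in range(generations):
        children = [mutate(rng.choice(population), input_words, rng)
                    for _ in range(pop_size)]
        # Keep the best individuals from parents and children combined.
        population = sorted(population + children,
                            key=lambda s: fitness(s, input_words),
                            reverse=True)[:pop_size]
    return population[0]


print(evolve("fresh roasted mountain coffee for busy mornings"))
```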