Introduction

The IRSA (Intelligent Robotics and Automation) team is excited to present its home robot, designed for the prestigious RoboCup@Home competition taking place in 2024. This esteemed event is a showcase for international teams to demonstrate cutting-edge robotics and artificial intelligence (AI) capabilities. As a participant, IRSA is committed to excellence and aims to make a significant impact in the competition. To achieve this goal, the team has collaborated with two innovative companies and a renowned university research group, bringing together a diverse range of expertise and resources. The IRSA@Home project is divided into several specialized sections, each focused on a critical aspect of the robot's development and performance:

1. Sensing and Perception: This section is responsible for equipping the robot with a broad range of sensors and perceptual abilities, enabling it to detect and interpret its environment with precision. The team leverages advanced computer vision and machine learning algorithms to interpret sensor data and make informed decisions.

2. Manipulation and Locomotion: The manipulation and locomotion sections focus on developing the robot's grasping, handling, and movement capabilities. The team designs and integrates advanced grippers, sensors, and actuators to enable the robot to perform various tasks, such as picking up and placing objects, navigating through tight spaces, and interacting with its environment.

3. Navigation and Mapping: This section is dedicated to empowering the robot with advanced navigation and mapping capabilities. The team develops sophisticated algorithms to enable the robot to create detailed maps of its environment, navigate through unfamiliar areas, and localize itself with high accuracy.

4. Human-Robot Interaction: The human-robot interaction section focuses on designing intuitive and natural interfaces that allow humans to interact with the robot in a seamless manner. The team develops advanced gesture recognition, voice recognition, and facial recognition systems to enable users to communicate with the robot and issue commands.

5. System Integration and Testing: This final section brings all the individual sections together, integrating them into a cohesive system and ensuring that the robot functions as a whole. The team conducts extensive testing and debugging to ensure the robot operates reliably and efficiently.

Downloads: Clip_1, Clip_2, Clip_3, Clip_4, Clip_5, Clip_6, and the Team Description Paper (TDP).

By leveraging the collective strengths of the IRSA@Home team, the robot is poised to excel in the RoboCup@Home competition, demonstrating advanced capabilities and setting new standards for home robotics. With a strong focus on competition participation, continuous learning, and improvement, IRSA is committed to showcasing the potential of intelligent robotics and attracting the next generation of innovators to this exciting field.

The team is also proud to present its comprehensive research and development project, led by Mojtaba Farzaneh in the Facial Recognition Department. The project is a collaboration between the following groups:

• Image Processing Research Group:

This group is dedicated to advancing the state-of-the-art in image processing techniques, particularly in the areas of facial recognition, object detection, and image segmentation. Led by Mojtaba Farzaneh, the group has made significant contributions to the field, including the development of a deep learning-based facial recognition system with a high degree of accuracy.

• Robotics Laboratory:

Under the supervision of Dr. Ebrahim Dashty, this laboratory focuses on the design and development of advanced robots capable of performing a wide range of tasks, including navigation, manipulation, and inspection. The laboratory has developed several prototypes, including a humanoid robot with advanced motion planning and control capabilities.

• Mapping and Navigation Group:

This group is responsible for creating detailed maps of the robot's environment and developing algorithms for navigation and localization. Led by Mohammad Sadegh Chakeri, the group has developed a sophisticated mapping system that allows the robot to navigate through unfamiliar areas with high accuracy.

• Robot Mechanics Group:

This group focuses on the design and development of advanced robotic systems, including the mechanical and electrical components required for locomotion and manipulation. Led by Ali Saleh, the group has developed several innovative robotic platforms, including a robotic arm with advanced grasping capabilities.

• Control and Electronics Group:

This group is responsible for the design and development of control systems and electronics for the robot. Led by Mostafa Nori, the group has developed a highly efficient control system that allows the robot to operate with a high degree of accuracy and reliability.

• Sound Processing Group:

Under the supervision of Atefeh Heidaryan, this group focuses on developing advanced algorithms for sound processing and recognition. The group has made significant contributions to the field, including the development of a deep learning-based sound recognition system with a high degree of accuracy.

• Control and Guidance Group:

Led by Hasan Alijan Pour, this group focuses on developing advanced control and guidance algorithms for the robot, including trajectory planning, motion control, and obstacle avoidance. The group has developed several innovative algorithms that enable the robot to operate with a high degree of precision and safety.

Through their collaboration and dedication, the IRSA team has made significant advancements in the field of robotics and automation, positioning themselves as leaders in the field. Their research and development projects showcase the potential of intelligent robotics and highlight the opportunities for future growth and innovation.

Robot Parts

The robot consists of three primary components:

Basic Part:

• Four DC hoverboard motors driven by PWM signals (for movement and maneuverability)

• 48V DC battery (for powering the robot's systems)

• Two Arduino units (for signal transmission and robot control; see the sketch below)
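The exact link between the main computer and the two Arduino units is not documented here, but a minimal sketch of how PWM set-points for the four motors might be sent over a serial line follows. The port name, baud rate, and the "M,<fl>,<fr>,<rl>,<rr>" message format are illustrative assumptions, not the team's actual protocol.

import serial  # pyserial

def send_pwm(link, fl, fr, rl, rr):
    """Send one PWM duty-cycle command (0-255 per motor) to the Arduino."""
    for duty in (fl, fr, rl, rr):
        if not 0 <= duty <= 255:
            raise ValueError("PWM duty cycle must be in 0..255")
    # Hypothetical line format: 'M' plus one duty value per wheel.
    link.write(f"M,{fl},{fr},{rl},{rr}\n".encode("ascii"))

link = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)  # assumed port/baud
send_pwm(link, 128, 128, 128, 128)  # all four hoverboard motors at half duty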

Middle Part:

• Five ultrasonic sensors (for detecting objects and measuring distances)

• Two laser distance sensors (for precise distance measurement)

• One microwave sensor (for detecting motion within a 5-meter range around the robot)

Upper Part:

• Two cameras (for advanced vision and automation capabilities):

o One for face processing (utilizing Kinect2 technology): This camera is used for facial recognition, detection, and identification. It enables the robot to recognize and interact with specific individuals.

o One for mapping, routing, and image processing (5-megapixel camera): This camera provides high-resolution images and detailed information about the robot's environment. It aids in obstacle detection, navigation, and the creation of detailed maps.

By combining these components, the robot is able to perceive and interact with its environment in a highly advanced and nuanced manner, enabling it to perform a wide range of tasks with precision and accuracy.

Main Controller Circuit:

The core of the control system is an ARM Cortex M4 microcontroller, which serves as the development circuit responsible for orchestrating the tasks crucial to the robot's operation. This microcontroller handles data from the array of sensors distributed throughout the robot's structure, such as the ultrasonic sensors, laser distance sensors, and camera modules, which together gather a comprehensive picture of the robot's environment.

Once the ARM Cortex M4 collects this sensor data, it constructs what the team calls a 'virtual image network': a digital representation of the physical world the robot navigates, synthesized from the sensory data. This virtual representation serves as the robot's perception of its surroundings, allowing it to make informed decisions and interact with its environment intelligently.
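As a rough illustration of what such a 'virtual image network' could look like in software, the sketch below folds individual range readings into a 2-D occupancy grid. The grid size, cell resolution, and sensor model are assumptions for illustration; the actual representation on the Cortex M4 is not described in this document.

import math
import numpy as np

GRID = np.zeros((100, 100), dtype=np.uint8)  # 100 x 100 cells (assumed)
CELL_M = 0.10                                # 10 cm per cell (assumed)

def mark_obstacle(robot_xy, heading_rad, range_m):
    """Mark the cell hit by a range reading taken along the robot heading."""
    ox = robot_xy[0] + range_m * math.cos(heading_rad)
    oy = robot_xy[1] + range_m * math.sin(heading_rad)
    i, j = int(oy / CELL_M), int(ox / CELL_M)
    if 0 <= i < GRID.shape[0] and 0 <= j < GRID.shape[1]:
        GRID[i, j] = 255  # occupied

# e.g. an ultrasonic echo 1.2 m dead ahead of a robot standing at (5 m, 5 m):
mark_obstacle((5.0, 5.0), 0.0, 1.2)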

Processing image data is another critical function of the main control circuit. This involves algorithms that analyze visual data to recognize objects and patterns, and possibly to perform more complex operations such as depth perception, edge detection, or motion tracking. This image-processing capability is fundamental for obstacle avoidance, targeted movement, or any function that requires visual identification.
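For a concrete flavor of the edge-detection step mentioned above, here is a minimal OpenCV (Python) sketch; the input file name and Canny thresholds are illustrative assumptions.

import cv2

frame = cv2.imread("frame.jpg")                  # one camera frame (assumed)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # edges need a single channel
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)              # hysteresis thresholds
cv2.imwrite("edges.png", edges)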

Furthermore, the ARM Cortex M4 circuit is equipped to receive motion commands. These commands are probably generated from a separate circuit dedicated to image processing and mapping. After processing the image data, this circuit can identify where the robot is and where it needs to go, then translate this information into motion commands that instruct the robot on how to move.

These motion commands are fed to the ARM Cortex M4 circuit, which then uses the 'network' created by the sensors to guide the robot. It controls the robot’s movements with precision, ensuring actions are taken based on real-time sensory inputs and the pre-processed image and mapping data. The movement control could encompass a range of activities such as starting, stopping, changing direction, and adjusting speed, enabling the robot to navigate through its environment autonomously and perform designated tasks with minimal or no human intervention. The ARM Cortex M4’s roles, from receiving sensor data to controlling robot actions, illustrate a sophisticated and well-integrated control system capable of autonomous robotic operation.
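To make the last step concrete, the arithmetic below shows one common way a motion command, a linear velocity v and an angular velocity omega, can be mapped onto left and right wheel speeds for a skid-steered base like this four-motor platform. The differential-drive model and the track width value are assumptions, not the team's documented controller.

TRACK_WIDTH_M = 0.5  # distance between left and right wheels (assumed)

def motion_to_wheels(v_mps, omega_radps):
    """Convert a (v, omega) command to left/right wheel linear speeds."""
    v_left = v_mps - omega_radps * TRACK_WIDTH_M / 2.0
    v_right = v_mps + omega_radps * TRACK_WIDTH_M / 2.0
    return v_left, v_right

# e.g. 0.3 m/s forward while turning left at 0.5 rad/s:
print(motion_to_wheels(0.3, 0.5))  # (0.175, 0.425)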

Image Processing and Robot Guidance:

In this segment of the robot's design, there is a dual-camera configuration positioned strategically on the top portion of the robot. This placement is likely chosen for optimal field of view and to simulate a vantage point similar to human eye-level, which is particularly useful for tasks that require a clear and unobstructed view of the robot's surroundings.

The first camera in this setup is likely a standard RGB camera, well suited to facial recognition tasks. This means the camera can detect human faces within its range, distinguish individual features, and possibly compare them against a database for identification purposes. Such a feature is invaluable in scenarios where the robot is expected to interact with or assist different individuals, ensuring personalized interaction or security authorization.
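As an illustration of the detection stage only (not the database matching described above), a minimal OpenCV face-detection sketch using the bundled Haar cascade might look like this; parameter values are illustrative.

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("person.jpg")                 # assumed input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.png", frame)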

Concurrently, obstacle detection is another critical function of this top-mounted dual-camera system. By constantly scanning the environment, the cameras can identify potential obstructions in the robot’s path. Utilizing advanced computer vision algorithms, the system can analyze the camera feed to differentiate between static objects like walls and moving obstacles like people or animals, allowing the robot to navigate safely and efficiently.
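One simple way to separate moving obstacles from the static scene, in the spirit of this paragraph, is frame differencing between consecutive camera frames; the threshold and minimum-area values below are illustrative assumptions.

import cv2

prev = cv2.cvtColor(cv2.imread("frame_t0.jpg"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame_t1.jpg"), cv2.COLOR_BGR2GRAY)
diff = cv2.absdiff(curr, prev)                       # pixel-wise change
_, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
movers = [c for c in contours if cv2.contourArea(c) > 500]  # ignore noise
print(f"{len(movers)} moving region(s) detected")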

Pathfinding and routing come next in the operational chain. These cameras provide the necessary visual inputs to calculate the most efficient route from one point to another within the robot's operating space. The robot employs sophisticated algorithms to plot a course that avoids obstacles while optimizing travel time and energy usage.
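The document does not name the planner, but grid-based A* is the classic fit for this kind of routing over an occupancy map, so the sketch below is illustrative only.

import heapq

def astar(grid, start, goal):
    """grid[i][j] == 1 means blocked; returns a list of cells or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue
        came_from[cell] = parent
        if cell == goal:  # walk parents back to the start
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + di, cell[1] + dj)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cell))
    return None

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # detours around the blocked middle row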

To support depth perception and spatial mapping, a Kinect2 sensor is incorporated into the system. This sensor uses an infrared emitter and camera to measure, by time of flight, how long the emitted light takes to return from each point in the scene. From these measurements the Kinect2 can gauge distances and construct a detailed volumetric representation of the space. This spatial mapping is crucial for tasks that require knowledge of the location and size of objects and spaces around the robot, such as navigation, object manipulation, or working in dynamic environments.
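Given a depth reading, recovering the 3-D point behind a pixel follows the standard pinhole model. The focal lengths and principal point below are typical published values for the Kinect2's 512x424 depth camera, used here as illustrative assumptions.

FX, FY = 365.0, 365.0   # focal lengths in pixels (assumed)
CX, CY = 256.0, 212.0   # principal point of the 512x424 depth image (assumed)

def depth_pixel_to_point(u, v, depth_m):
    """Return the (x, y, z) point, in metres, seen at depth pixel (u, v)."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return x, y, depth_m

print(depth_pixel_to_point(256, 212, 2.0))  # optical centre -> (0.0, 0.0, 2.0)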


Moreover, a 5-megapixel camera is dedicated to face processing. Although modest in resolution compared with more advanced imaging systems, a 5-megapixel camera can still capture the facial detail necessary for analysis. This processing stage recognizes facial features and expressions, possibly to facilitate interaction with users by assessing their emotions or to gather demographic data for analytics.

Together, this advanced sensor and camera array equip the robot with a nuanced perception of its environment, akin to a sentient being’s vision and spatial awareness. By combining face recognition, obstacle detection, pathfinding, routing, mapping, and face processing, the robot is well-prepared to interact with humans, navigate diverse landscapes, and undertake complex tasks with a high degree of autonomy.

Expanding upon the capabilities of this advanced robotic platform, the inclusion of other types of sensors, such as ultrasonic units, imbues the robot with even greater awareness and interactive potential. Ultrasonic sensors are indispensable in the context of robotic navigation and spatial assessment. They function by emitting high-frequency sound waves and then listening for the echo as these waves bounce off objects. By calculating the time interval between sending the signal and receiving the echo, the sensors can determine the distance to nearby objects with notable precision.
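The time-of-flight arithmetic is simple enough to show directly: the echo travels to the object and back, so the one-way distance is half of speed times time. The speed-of-sound constant assumes roughly 20 °C indoor air.

SPEED_OF_SOUND_MPS = 343.0  # dry air at ~20 C (assumed conditions)

def echo_to_distance(echo_time_s):
    """Distance in metres for a measured round-trip echo time."""
    return SPEED_OF_SOUND_MPS * echo_time_s / 2.0

print(echo_to_distance(0.01))  # a 10 ms echo ~= 1.7 m to the object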

The strategic placement of ultrasonic sensors around the robot’s chassis ensures a 360-degree detection field, allowing it to perceive its environment in a more holistic manner. This comprehensive coverage is critical for navigating through tight spaces or crowded environments where visual inputs alone might not suffice. For example, while cameras can be obstructed or affected by lighting conditions, ultrasonic sensors provide reliable distance measurements irrespective of visual factors.

Utilizing ultrasonic sensors, the robot can measure not only the distance to objects but also deduce the size and shape of empty space available. This enables the robot to make real-time adjustments to its path, seamlessly avoiding collisions while maneuvering in an efficient and purposeful manner. It also allows the robot to engage in more complex spatial tasks, such as fitting through gaps, aligning itself with specific objects, or operating in completely dark environments where visual sensors would be ineffective.

The fusion of data from cameras, Kinect2, and ultrasonic sensors gives the robot a multidimensional understanding of its surroundings. Sophisticated algorithms likely process the sensory inputs, synchronize them, and interpret the diverse streams of data to create a coherent and actionable environmental model. This sensor synergy enables the robot not just to 'see' its world in a human-like manner—with color, light, and texture via the cameras—but also to 'feel' the proximity of surfaces and objects in a bat-like fashion using echolocation through ultrasonic sensors.
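As a deliberately simple illustration of such fusion (not the team's documented algorithm), one conservative rule is to act on the nearer of the camera and ultrasonic estimates, falling back to the ultrasonic reading when camera confidence is low; the confidence gate value is an assumption.

def fused_obstacle_distance(camera_est_m, camera_conf, ultrasonic_m):
    """Return the obstacle distance to act on, preferring the safer reading."""
    if camera_conf < 0.5:          # poor lighting, occlusion, etc. (assumed gate)
        return ultrasonic_m        # fall back to echo ranging alone
    return min(camera_est_m, ultrasonic_m)

print(fused_obstacle_distance(1.4, 0.9, 1.1))  # -> 1.1 (ultrasonic is nearer)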

In essence, the sophisticated sensor array enhances the robot's cognitive architecture, allowing it to behave in an informed and intelligent way as it interacts with the world. It can confidently navigate, recognize individuals, avoid obstacles, and understand its spatial context in a deeply nuanced way, functioning safely and effectively in both predictable and unpredictable environments.

Raspberry Pi 4 Model B

The image processing and robot guidance section of this platform uses a Raspberry Pi 4 Model B running the Windows 11 operating system. This powerful single-board computer serves as the brains of the robot, responsible for collecting and analyzing visual data from the sensors and cameras mounted on the robot. The image processing algorithms are written in C#, a language with extensive support for image processing tasks, and Python, a versatile language with a vast selection of libraries for machine learning and computer vision. By leveraging these languages, the robot can efficiently process and interpret visual data, allowing it to make informed decisions and navigate its environment with precision.

Under Windows 11, the Raspberry Pi 4 Model B can tap into a wealth of image processing libraries and frameworks, such as OpenCV, TensorFlow, and PyTorch. These tools provide pre-built functions and classes for tasks like object detection, facial recognition, and image segmentation, streamlining the image processing workflow and letting the robot focus on higher-level tasks like motion planning and control. The Raspberry Pi 4 Model B's processing power and memory, together with these algorithms and libraries, allow the robot to handle complex image processing tasks efficiently. For example, it can quickly detect and track objects, recognize faces, and identify patterns in the environment, enabling it to make informed decisions and interact with its surroundings in an intelligent manner.
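As one small example of the kind of pre-built workflow these libraries enable, the sketch below performs a crude image segmentation with OpenCV alone, via Otsu thresholding and contour extraction; the file names are illustrative assumptions.

import cv2

image = cv2.imread("scene.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)            # one box per segment
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imwrite("segments.png", image)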

In summary, the combination of the Raspberry Pi 4 Model B, Windows 11, and advanced image processing algorithms in C# and Python enables the robot to process and understand visual data from its sensors and cameras precisely, allowing it to navigate its environment with autonomy and confidence.

About

IRSA 2024

There are various roles that collectively cover a wide range of activities. Below are some of the roles and required skills:

Management:

Responsibilities:

Determining the company's general strategies, monitoring the performance of the executive team, and communicating with other members of the company.

Expertise:

General management, strategy formulation, interpersonal communication, strategic leadership, and strategic decision-making.

Members:

✔ Ebrahim Dashty - Consultant, Co-Founder
✔ Mojtaba Farzaneh - Leader, Co-Founder
✔ Hamid Khezri - Advisor

Robotics Engineers:

Skills:

Electrical, mechanical, software, and control engineering.

Tasks:

Design, development, and optimization of robots and control systems.

Members:

✔ Mostafa Nori - Electronic Boards
✔ Younes Farzaneh - Robot Design
✔ Hasan Alijan Pour - ARM and Moving
✔ Majid Rasti Sani - Body and Concept Design
✔ Masome Fatahi - Robot Control
✔ Ali Hayati - Electronic Boards
✔ Ali Saebi Rad - Body and Moving
✔ Mohammad Sadegh Chakeri - Mapping and Navigation

Programmers:

Skills:

Robotics software programming, image processing, machine learning, and control.

Tasks:

Writing code for robotic software, image processing, machine learning, artificial intelligence, and control systems.

Members:

✔ Mojtaba Farzaneh - Senior Programmer
✔ Neda Mazrouei - Image Processing
✔ Arezoo Esmaeili - Artificial Intelligence
✔ Leila Salimi - Software Design

Mechanical Engineers:

Skills:

Mechanical engineering and fabrication.

Tasks:

Designing and building robotic hardware, conducting physical tests.

Members:

✔ Ali Saleh - Mechanical Parts
✔ Hasan Alijan Pour - ARM and Moving

Software Designers:

Skills:

User interface design, software application development, and database systems.

Tasks:

Developing control software, user applications, and user interface design.

Members:

✔ Malihe Dashty - Designer and Illustrator
✔ Atefeh Heidaryan - Software Design
✔ Neda Mazrouei - Image Processing

Project Manager:

Skills:

Project Management, interpersonal communication, time management.

Tasks:

Project Planning, team management, and customer communication.

Members:

✔ Ebrahim Dashty - Strategy and Business Manager
✔ Mojtaba Farzaneh - Team Leader

Marketing Specialists:

Skills:

Digital marketing, online advertising, market analysis.

Tasks:

Advertising and marketing robotic products, market analysis, understanding customer needs.

Members:

✔ Ebrahim Dashty - Market Analysis
✔ Atefeh Heidaryan - Digital Marketing

Legal and Financial Management:

Skills:

E-commerce law, financial management, accounting.

Tasks:

Resolving legal issues, financial and economic management of the team.

Members:

✔ Ebrahim Dashty - Legal and Financial Management

Cybersecurity Experts:

Skills:

Cybersecurity and data protection.

Tasks:

Enhancing system security and preventing cyber-attacks.

Members:

✔ Maryam Yousefnezhad - Deep Learning
✔ Noushin Eftekhari - Deep Learning
✔ Ebrahim Dashty - Data Network Security

In general, the composition of a robotics team should include individuals with diverse skills and expertise to ensure that the team operates optimally and considers all aspects of development and productivity.

Team Members

Name | Study Field | Degree | Role | Location
Ali Hayati | Electronic Engineering | Master | Electronic Boards | Iran
Ali Saebi Rad | Electronic Engineering | Bachelor | Body and Moving | Iran
Ali Saleh | Mechanical Engineering | Master | Mechanical Parts | Iran
Arezoo Esmaeili | Artificial Intelligence | Master | Artificial Intelligence | Iran
Atefeh Heidaryan | Computer Engineering | Bachelor | Software Design | Iran
Ebrahim Dashty | IT Management | PhD | Consultant, Co-Founder | Iran
Hamid Khezri | Electronic Engineering | PhD | Advisor | South Africa
Hasan Alijan Pour | Electronic Engineering | Master | ARM and Moving | Iran
Leila Salimi | Computer Engineering | Master | Software Design | Iran
Majid Rasti Sani | Architecture | Master | Body Design | Iran
Malihe Dashty | Graphic Design | Master | Designer | Iran
Maryam Yousefnezhad | Cybersecurity | PhD | Deep Learning | Canada
Masome Fatahi | Electronic Engineering | PhD | Robot Control | USA
Mohammad Sadegh Chakeri | Electronic Engineering | Master | Mapping and Navigation | Iran
Mojtaba Farzaneh | Electronic Engineering | Master | Leader, Co-Founder | Iran
Mostafa Nori | Electronic Engineering | Master | Electronic Boards | Iran
Neda Mazrouei | Computer Engineering | Bachelor | Image Processing | Iran
Noushin Eftekhari | Cybersecurity | PhD | Deep Learning | UK
Younes Farzaneh | Civil Engineering | Master | Robot Design | Iran

About Team

We are an R&D group that values sharing our knowledge and skills. Working cohesively as a team, we have undertaken specialized activities and achieved top ranks in Robotics, Image Processing, Artificial Intelligence, and more. With extensive executive and scientific experience, we have successfully patented more than five inventions.

A significant portion of our work revolves around electronic and robotic systems. Our team has been active in this field for over 15 years, with specialized expertise and particular strength in Robotics and Artificial Intelligence.

Our activities encompass the fields of Electronics and Computers. In Electronics, our focus revolves around digital and analog board design. In the realm of Computers, we specialize in Software Design, Image Processing, Neural Networks, and Artificial Intelligence. Our capabilities extend to handling any Electronic or Computer project.

Concept Design

Version 1 - IRSA 2023

Version 2 - IRSA 2024

Version 3 - IRSA 2024
