Call for Interns
OMRON SINIC X (OSX) is looking for research interns throughout the year to work with our members on challenging research projects on a variety of topics related to robotics, machine learning, computer vision, and HCI. Many students have participated in our internship program, and their achievements have been published as academic papers at international conferences such as CVPR, ICML, IJCAI, ICRA, and CoRL, or released as OSS libraries. For more information about our activities at OSX, please visit our Medium and GitHub pages.
Research on integration of physical or algorithmic principles into machine learning
In this research project, we aim to develop novel models and methods that seamlessly integrate machine learning with classical problems, such as physics simulations and mathematical optimization, that are grounded in physical and algorithmic principles. Based on the research outcomes, we will prepare and submit a paper targeting top-tier international conferences in the field of machine learning, such as NeurIPS, ICML, and ICLR.
-
- Required skills and experience
-
- Experience with deep learning, either through research or reproducing existing methods
- Either 1) a good understanding of geometric deep learning, or 2) a Bachelor's degree or higher in physics or mathematics
- Python development skills
-
- Preferred skills and experience
-
- Knowledge of geometric mathematics, such as spherical harmonics and rotational equivariance
- Background in physics or mathematics (e.g., Bachelor's degree in physics or math)
- Research and/or development experience in geometric deep learning
- Skills in implementing custom forward/backward functions and custom GPU kernels (e.g., with PyTorch); a brief illustrative sketch follows this list
- Proficiency in C++
- Experience using GitHub/GitLab and Docker
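As a purely illustrative aside on the custom forward/backward skill listed above, here is a minimal sketch assuming PyTorch (the framework named in that bullet); the toy operation and values are hypothetical and not taken from any OSX project.

```python
import torch

class ScaledExp(torch.autograd.Function):
    """Toy custom autograd op: y = exp(scale * x). Illustrative only."""

    @staticmethod
    def forward(ctx, x, scale):
        y = torch.exp(scale * x)
        ctx.save_for_backward(y)
        ctx.scale = scale
        return y

    @staticmethod
    def backward(ctx, grad_output):
        (y,) = ctx.saved_tensors
        # d/dx exp(scale * x) = scale * exp(scale * x)
        return grad_output * ctx.scale * y, None  # no gradient for `scale`


x = torch.randn(4, requires_grad=True)
ScaledExp.apply(x, 2.0).sum().backward()
print(x.grad)  # equals 2.0 * exp(2.0 * x)
```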
- Machine learning
- Algorithm
- Geometric deep learning
- Physics
Large-Scale Language Models Using Robotic Manipulation Data
We will develop a method to refine large language models using robotic manipulation data, such as simulation-generated training data and teleoperation demonstration data.
-
- Required skills and experience
-
- Knowledge and experience in language-model research
-
- Preferred skills and experience
-
- Knowledge and experience in robot motor-skill learning with physics simulation
- Ability to program in Python using PyTorch
- Proficiency with tools such as GitHub and Docker
- Machine learning
- Robotics
- Computer vision
- Natural language processing
- Physical simulation
Soft robotic foundation model
We are developing methods that enable robots to quickly adapt to various tasks by leveraging emerging robot foundation models and soft robotics technologies. Accepted interns will be involved in activities such as surveying robot foundation models, integrating pretrained models, exploring adaptation methods for novel environments, conducting experiments, and writing research papers.
The project also includes robust robot manipulation strategies using language, such as language-based error recovery. Our goal is to submit papers to top-tier international conferences and journals in robotics and machine learning (e.g., CoRL, RSS, RA-L, T-RO, IJRR, ICRA, IROS).
Interns will work closely with their mentor, meeting at least once per week to track progress, plan for paper submissions, and coordinate writing tasks to maximize the chances of successful publication.
While the following topics are particularly relevant, the scope is not limited to them; project themes will be flexibly tailored to the intern's expertise. We especially welcome interns with experience using or developing large language models (LLMs) and a strong interest in their application to robotics. This project is expected to be conducted on-site in Tokyo.
- Soft robotic manipulation
- Robot foundation models
- Language-guided robot manipulation
-
- Required skills and experience
-
- Experience in Python or C++
At least one of the following:
- Research and development experience in robot learning, sensing, control theory, or motion planning
- Research and development experience using machine learning or large language models
-
- Preferred skills and experience
-
- Research and development experience in robot foundation models
- Research and development experience in ROS
- Experience submitting papers in the fields of robotics and artificial intelligence
- Experience participating in robot competitions
- Machine learning
- Robotics
- Computer vision
- Signal processing
- Natural language processing
- Algorithm
- Development
- Soft robotics
- Robot foundation model
Investigating and Applying the Cognitive Mechanisms of Robot Embodiment
This project explores the cognitive mechanisms by which individuals perceive robotic systems as extensions of their own bodies (embodiment). By combining psychophysical experiments and machine learning, we aim to identify the key factors that contribute to embodiment and examine their potential applications. The project also includes submitting research findings to top-tier HCI conferences (e.g., CHI, UIST) and high-impact journals (e.g., Science, Nature). This project requires on-site work in Tokyo.
Related project: Swarm Body (CHI'24) https://medium.com/sinicx/-69bc10abfd64
-
- Required skills and experience
-
- Knowledge and experience in deep learning
- Understanding of embodiment in psychology (e.g., sense of agency, sense of body ownership)
- Skills and experience in developing VR systems using Unity
-
- Preferred skills and experience
-
- Experience presenting research papers in fields such as HCI, VR, CV, or ML
- Experience designing and conducting user experiments
- Basic knowledge of statistical analysis
- Machine learning
- Robotics
- Computer vision
- Interaction
- Embodied cognition
- Sense of agency
- World model
Robot Learning using large-scale skill database
We are seeking interns to work on novel methods of imitation learning and offline reinforcement learning that leverage past unlabeled demonstrations, enabling even small models to achieve high performance. Our focus includes developing databases that efficiently accumulate and retrieve skills from vast experience datasets, and methods for combining past skills to adapt to new tasks. We aim to publish the research outcomes at international conferences and to release them as a library with broad impact.
-
- Required skills and experience
-
- Research experience using PyTorch
- Research and development experience in machine learning
-
- Preferred skills and experience
-
- Publication record in the fields of robotics (CoRL, ICRA, IROS), machine learning (ICML, NeurIPS, ICLR), or relevant fields (SIGIR)
- Machine learning
- Robotics
- Computer vision
Question Generation from Videos
In this project, we aim to understand what makes skilled interviewers able to ask good questions. We will conduct research on technologies that can generate questions from videos. The results of our research will be submitted as papers to international conferences.
-
- Required skills and experience
-
- Experience in research and development of action analysis from videos
- Basic knowledge of Vision-Language Models and VideoLM
-
- Preferred skills and experience
-
- Experience having papers accepted at international conferences
- Research and development experience using RAG
- Computer vision
- Natural language processing
Robot software or hardware engineering
We invite accepted interns to contribute to ongoing research projects in robot software or hardware engineering. This may include programming to launch robotic systems, designing tools and environments for robots, and performing hardware maintenance tasks.
We also encourage interns to publish the outcomes of their engineering work as original or systems research papers, or to release open-source software libraries as part of their achievements during the internship.
-
- Required skills and experience
-
At least one of the following:
- Research or development experience in ROS
- Research or development experience in robot hardware design
- Research or development experience in sensing design
-
- Preferred skills and experience
-
- Experience participating in robot competitions
- Robotics
- Development
- Manipulation
- Mechanics design
- OSS development
Learning soft robotic tool manipulation with tactile sensors
The goal of this project is to enable robots equipped with physically soft bodies and tactile sensing to learn complex tool manipulation tasks, thereby elucidating the synergies among soft robots, tactile sensing, and motor learning.
To this end, developing a tactile-based learning framework, employing data-efficient learning approaches, and exploring strategies for learning from sub-optimal datasets will be essential.
We would like the accepted interns to develop the robot learning algorithm with tactile sensor fusion, implement robot software, perform experiments, and write papers.
We aim to submit papers to top-tier robotics or AI conferences and journals (e.g., ICRA, IROS, CoRL, RA-L, T-RO, and NeurIPS). The mentors will meet with the interns at least once a week to discuss research progress, plans for paper submissions, and the distribution of writing tasks to ensure a reliable submission.
The following themes are particularly relevant, but the scope is not limited to them; themes will be flexibly determined based on the intern's expertise. The project actively seeks interns with experience in machine learning and reinforcement learning development and a strong interest in robotics applications. The project assumes on-site work in Tokyo.
- Soft robotic manipulation learning
- Tactile-based manipulation using vision-based or distributed tactile sensors
- Offline and online reinforcement learning, and Imitation learning
- Language-guided manipulation
Papers accepted with interns as first authors under this mentor's supervision: IROS 2024, ICRA 2024 (2 papers), IROS 2023 (2 papers), NeurIPS 2023, IEEE Access, IEEE CASE 2021, and CoRL 2020.
Related projects (learning robotic assembly using a soft wrist and tactile sensors): https://omron-sinicx.github.io/saguri-bot-page/, https://omron-sinicx.github.io/symmetry-aware-pomdp/, https://omron-sinicx.github.io/soft-robot-sim-to-real/
-
- Required skills and experience
-
- Experience in Python or C++
At least one of the following:
- Research and development experience in robot learning, sensing, control theory, or motion planning
- Research and development experience using machine learning or reinforcement learning
-
- Preferred skills and experience
-
- Research and development experience in ROS
- Experience submitting papers in the fields of robotics and artificial intelligence
- Experience participating in robot competitions
- Machine learning
- Robotics
- Interaction
- Signal processing
- Algorithm
- Development
- Soft robotics
- Tactile sensing
Research on Vision-Language Models Sensitive to State Changes in Videos
This research focuses on developing Vision-Language Models capable of detailed understanding of object state changes and event transitions within videos.
-
- Required skills and experience
-
- Expert knowledge of Vision-Language Models
- In-depth knowledge of video analysis or experience in developing large-scale datasets
-
- Preferred skills and experience
-
- Research and development experience on the compositionality problem in VLMs
- Research and development experience in object tracking and state change description in videos
- Machine learning
- Computer vision
- Natural language processing
Development of an Explainable Language-Guided Planning Framework
In this project, we will develop an automatic generation framework for creating "reliable" task plans based on language instructions. We will work on task planning technology that integrates path planning using LLMs, RAG, and optimization techniques, as well as the development of agents that can perform tasks within a PC environment.
-
- Required skills and experience
-
- Basic knowledge of LLMs and RAG
- Extensive knowledge of optimization, including pathfinding
- Programming experience using LLMs
-
- Preferred skills and experience
-
- Experience in building RAG
- Experience using and developing task planners (e.g., PDDL-based planners)
- Product development experience
- Natural language processing
- Algorithm
- State Space Generation
3D-printed one-piece force transmission mechanism design
We will develop a design method for a 3D-printed one-piece force transmission mechanism to be used for lightweight manipulators that are actuated from the root, such as the manipulator we developed in the past (https://omron-sinicx.github.io/twistsnake/).
-
- Required skills and experience
-
- Experience in robotic mechanism design and 3D printing
-
- Preferred skills and experience
-
- Experience participating in robot competitions
- Experience in team-based development
- Experience receiving an award from an academic society or a scholarship
- Experience coding in ROS, Python, or C++ for robot development
- Robotics
- Mechanics design
Learning object manipulation skills that generalize across objects and tasks
We will develop a method for learning object manipulation skills that generalize across objects and tasks.
-
- Required skills and experience
-
- Publication record in robot learning (ICRA, IROS, CoRL, ICML, NeurIPS, ICLR, etc.)
- Experience in manipulation learning
-
- Preferred skills and experience
-
- Management of projects and code using git/GitLab/GitHub
- Experience in team-based development
- Machine learning
- Robotics
- Manipulation
- Imitation learning
Learning Manipulation with Segmentation Using Large-Scale Language Models
We will develop a method for learning manipulation with segmentation using large-scale language models.
-
- Required skills and experience
-
- Publication record in robot learning (ICRA, IROS, CoRL, ICML, NeurIPS, ICLR, etc.)
- Experience in manipulation learning
-
- Preferred skills and experience
-
- Management of projects and code using git/GitLab/GitHub
- Experience in team-based development
- Machine learning
- Robotics
- Manipulation
- Large language model
Research on robust image recognition model and multi-modal language model
Image recognition models are known to be vulnerable to domain shifts caused by environmental changes. This project aims to construct robust image recognition models that are resilient to such changes. We plan to select research topics on robust image recognition from a range of areas, including but not limited to the generalization of image classification models and applications such as vision-language models, and aim to submit the results to international conferences.
-
- Required skills and experience
-
- Experience in Python development
- Experience in developing machine learning models for image recognition
-
- Preferred skills and experience
-
- Knowledge of image recognition models
- Knowledge of transfer learning and multi-modal models
- Experience writing papers in related fields
- Machine learning
- Computer vision
- Natural language processing
- Domain Generalization
- Image recognition
- Multi-modal model
AI for Science
You will work on AI research that accelerates and automates research and development itself. You will participate in subprojects toward realizing AI scientists that can formulate research claims, run experiments, analyze the results, and write papers in interactive co-evolution with human researchers.
-
- Required skills and experience
-
- Python, GitHub, Docker
- Knowledge and experience in deep learning
-
- Preferred skills and experience
-
- Knowledge and experience in natural language processing, computer vision, and data science
- Mathematical knowledge and formulation ability in machine learning and deep learning
- Interaction
- Algorithm
- Development
Research on Practical LLMs in the Medical Field
This research explores the practical application of large language models (LLMs) in the medical field, focusing on solving real-world challenges in diagnostic support, treatment planning, and patient communication. Key approaches include designing multimodal models that integrate diverse data for intuitive use, fine-tuning LLMs to enhance medical expertise, and connecting external knowledge bases for up-to-date medical information access. Furthermore, to support timely decision-making in information-rich medical environments, we also consider a multi-agent LLM system in which specialized agents collaborate to extract and present relevant information. By addressing these aspects, the study aims to reduce the workload of healthcare professionals, improve clinical efficiency, and develop foundational technologies that can be effectively utilized in real medical settings.
-
- Required skills and experience
-
- Publication record in related fields
- Knowledge and research/development experience with LLMs
-
- Preferred skills and experience
-
- Experience related to the medical field
- Machine learning
- Computer vision
- Interaction
- Natural language processing
- LLM
- Medical
Research on 3D vision including visual SLAM and NeRF
In this research project, we aim to develop novel models and methods for image-based 3D sensing technologies, such as Visual SLAM and NeRF. Based on the research outcomes, we will prepare and submit a paper targeting top-tier international conferences in the field of computer vision, such as CVPR, ICCV, and ECCV.
-
- Required skills and experience
-
- Experience with deep learning, either through research or reproducing existing methods
- Good mathematical understanding of 3D geometry (e.g., perspective projection, rotation, and translation); a brief illustrative sketch follows this list
- Python development skills
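For illustration of the 3D-geometry background listed above, here is a minimal pinhole-camera sketch, assuming PyTorch; the intrinsics, pose, and 3D point below are made-up values and not part of any OSX project.

```python
import math
import torch

# Toy pinhole projection: rotate and translate a 3D point into the camera
# frame, then project it onto the image plane (all values are hypothetical).
K = torch.tensor([[500.0,   0.0, 320.0],
                  [  0.0, 500.0, 240.0],
                  [  0.0,   0.0,   1.0]])  # camera intrinsics

c, s = math.cos(0.1), math.sin(0.1)
R = torch.tensor([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])        # rotation about the optical axis
t = torch.tensor([0.0, 0.0, 2.0])          # translation (2 m along z)

X_world = torch.tensor([0.2, -0.1, 1.0])   # 3D point in world coordinates
X_cam = R @ X_world + t                    # world frame -> camera frame
x_hom = K @ X_cam                          # homogeneous image coordinates
u, v = (x_hom[:2] / x_hom[2]).tolist()     # perspective division
print(f"pixel coordinates: ({u:.1f}, {v:.1f})")
```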
-
- Preferred skills and experience
-
- Knowledge and experience in 3D deep learning or classical VSLAM
- Knowledge and experience in numerical optimization
- Skills in implementing custom forward/backward functions and custom GPU kernels (e.g., with PyTorch)
- Proficiency in C++
- Experience using GitHub/GitLab and Docker
- Machine learning
- Computer vision
- 3D vision
- Optimization
Study on multi-agent path planning algorithms from the topological viewpoint
We will study approaches to multi-agent path planning from the topological viewpoint. Accepted interns are expected to work in collaboration with the mentors to submit research results to top international conferences in the field of artificial intelligence and machine learning.
-
- Required skills and experience
-
- Basic knowledge in topological geometry and computational geometry
-
- Preferred skills and experience
-
- Research and development experiences on path planning, especially multi-agent path planning
- Publication record in the field of artificial intelligence (e.g., AAAI, IJCAI, AAMAS, ICML, NeurIPS, ICLR)
- Expert knowledge in topological geometry and computational geometry
- Algorithm
- Path planning
- Multi-agent system
Research on the application of machine learning to physics simulation methods and results
We will conduct research and development on the application of machine learning to the algorithms or data analysis of physics simulations (e.g., DFT, MD, tensor networks), and write papers for journals (e.g., Nature/Science, Physical Review) or conferences (e.g., SC, ICML).
-
- Required skills and experience
-
- Knowledge and experience in physics simulation (e.g., DFT, MD, tensor networks)
-
- Preferred skills and experience
-
- Knowledge and experience in machine learning
- PyTorch, Python
- GitHub, Docker
- Interaction
- Algorithm
- Development
- Physics simulation
Research on application of machine learning methods to physics simulation
We will conduct research and development on the application of your own machine learning methods to physics simulations, and write papers for venues in related fields (e.g., Nature/Science, Physical Review, SC/HPCG).
-
- Required skills and experience
-
- Knowledge and experience in machine learning
-
- Preferred skills and experience
-
- Knowledge and experience in physics simulation (e.g., DFT, MD, tensor networks)
- PyTorch, Python
- GitHub, Docker
- Interaction
- Algorithm
- Development
- Physics simulation
Conditions
Term: 3 months or longer (assuming 5 working days per week). Start and end dates can be adjusted. Some projects accept short-term interns for as little as 1 month.
Hours: Full-time or part-time (e.g., 3 days a week; negotiable). 45-minute breaks. Holidays and weekends off.
Location: On-site, hybrid, or remote options are available. Hybrid and remote options are only available if you live in Japan; due to legal issues, we cannot pay salaries to remote interns who live outside of Japan, so if you join our internship program you must come to Japan. In such cases, we offer support for travel expenses. Some internship projects may require on-site work; in this case, you will be assigned to one of our offices in Hongo or Shinagawa.
Salary: The full-time monthly salary ranges from 240,000 JPY to 480,000 JPY. An hourly rate is applied for part-time work. Social security and other benefits are provided according to the working conditions. Transportation and housing expenses are fully covered. In addition, other expenses necessary for research activities (PC, laptop, etc.) are fully supported.
Language: Japanese or English (English-only communication is also fine).
Others: Two or more mentors with extensive research experience will provide in-depth support for each project. Computational resources (workstations and cloud servers with GPUs) and robotic facilities (robotic arms, various sensors, 3D printers, motion capture systems, and other prototyping and experimental equipment) are available.
How to apply
Please fill in the application form. We will first screen each application based on the submitted information.
For other inquiries, please contact internships@sinicx.com.
Those who pass the above screening will be interviewed remotely. Please prepare slides or other materials introducing your past research and development activities and achievements.
Please contact us at least three months in advance if you need a visa to enter Japan.