Call for Interns
OMRON SINIC X (OSX) is looking for research interns throughout the year to work with our members on challenging research projects on a variety of topics related to robotics, machine learning, computer vision, and HCI. Many students have participated in our internship program, and their achievements have been published as academic papers at international conferences such as CVPR, ICML, IJCAI, ICRA, and CoRL, or released as OSS libraries. For more information about our activities at OSX, please visit our Medium and GitHub pages.
Prediction of dementia from multimodal data based on gait and eye gaze
In this study, we will conduct research on predicting dementia from gait and eye gaze. Specifically, we will study machine learning models that predict dementia from multimodal time-series data of gait and gaze.
This is joint research with the Suzuki Laboratory at Ichinoseki National College of Technology.
-
- Required skills and experience
-
- Python, Github, Docker
- Knowledge and experience in deep learning
-
- Preferred skills and experience
-
- Knowledge and experience in natural language processing, computer vision, and data science
- Mathematical knowledge and formulation ability in machine learning and deep learning
- Machine learning
- Interaction
- Signal processing
Ophthalmology Specialty AI
In this study, we will develop an ophthalmology-specialist AI that can pass the ophthalmologist examination using LLMs, and verify its accuracy from various angles.
We will also research and develop a system that presents and summarizes the evidence for its answers from multiple sources.
-
- Required skills and experience
-
- Python, Github, Docker
- Knowledge and experience in deep learning
-
- Preferred skills and experience
-
- Knowledge and experience in natural language processing, computer vision, and data science
- Mathematical knowledge and formulation ability in machine learning and deep learning
- Machine learning
- Interaction
- Natural language processing
- LLM
- Explainable AI (XAI)
The fastest multimodal JetRacer
In this development project, we will create the world's fastest JetRacer. In cooperation with FaBo, we will incorporate the latest research results into JetRacer.
The University of Aizu and other locations are envisioned as racing venues.
-
- Required skills and experience
-
- none
-
- Preferred skills and experience
-
- none
- Machine learning
- Interaction
- Development
- JetRacer
Development of the back-projected robot head
We will develop avatars with emotions for elderly caregivers. Specifically, we will design a rear-projection robot head using a micro-projector and a fisheye lens, a 2-DOF neck for moving the head, and the software to control them.
-
- Required skills and experience
-
- none
-
- Preferred skills and experience
-
- none
- Robotics
- Interaction
- Development
Learning manipulation of a soft arm
We will develop a method for learning manipulation leveraging the physical softness of a soft arm.
-
- Required skills and experience
-
- Research experience using deep learning
- Research experience in robot learning
-
- Preferred skills and experience
-
- Publication record in robotics or machine learning (ICRA, IROS, CoRL, ICML, NeurIPS, ICLR, etc.)
- Management of projects and code using git/GitLab/GitHub
- Experience of coding in ROS, Python, or C++ in the development of a robot
- Experience in team-based development
- Knowledge related to robotics (e.g., control, optimization, HCI, computer vision, etc.)
- Machine learning
- Robotics
Development of an energy system for sustainable robots
We will develop an energy system for sustainable robots.
-
- Required skills and experience
-
- Robot development experience
-
- Preferred skills and experience
-
- Publication record in robotics (ICRA, IROS, etc.)
- Management of projects and code using git/GitLab/GitHub
- Experience of coding in ROS, Python, or C++ in the development of a robot
- Experience in team-based development
- Knowledge related to robotics (e.g., control, optimization, HCI, computer vision, etc.)
- Robotics
GPU-accelerated Search Algorithm at Scale
We are seeking interns to work on the research of a new search algorithm using optimal transport for a large-scale 3D map or a fashion-coordination database. In this project, we will focus on designing and implementing scalable algorithms using GPGPU to handle large-scale data. Our goal is to publish research papers at international conferences and to release a widely used library based on our research results.
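To give a flavor of the topic, below is a minimal, hedged sketch of entropy-regularized optimal transport solved with Sinkhorn iterations on a GPU in PyTorch. It is illustrative only, not the project's actual algorithm or codebase; the cost matrix, regularization strength, and data sizes are assumptions.

```python
# Illustrative sketch: entropic optimal transport via Sinkhorn iterations on GPU.
# Not the project's implementation; sizes and epsilon are arbitrary assumptions.
import torch

def sinkhorn(cost, a, b, eps=0.05, n_iters=200):
    """Approximate the entropic OT plan between histograms a and b."""
    K = torch.exp(-cost / eps)               # Gibbs kernel
    u = torch.ones_like(a)
    v = torch.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)                       # row scaling
        v = b / (K.T @ u)                     # column scaling
    return u[:, None] * K * v[None, :]        # transport plan

device = "cuda" if torch.cuda.is_available() else "cpu"
n, m = 1024, 1024
x = torch.randn(n, 3, device=device)          # e.g. 3D map points (toy data)
y = torch.randn(m, 3, device=device)          # e.g. query features (toy data)
cost = torch.cdist(x, y) ** 2
cost = cost / cost.max()                      # normalize to avoid underflow in exp()
a = torch.full((n,), 1.0 / n, device=device)
b = torch.full((m,), 1.0 / m, device=device)
plan = sinkhorn(cost, a, b)
print(plan.sum().item())                      # ~1.0: a valid coupling
```

Scaling this kind of kernel to much larger databases with custom CUDA is the sort of engineering the project involves.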
-
- Required skills and experience
-
- Research experience using PyTorch or TensorFlow
- Programming experience using GPUs
- Programming experience with C++/CUDA
- Fundamentals of optimal transport and search algorithms
- Research and development experience in machine learning
-
- Preferred skills and experience
-
- Advanced expertise in CUDA programming
- Programming experience with Cython/pybind/nanobind
- Advanced expertise in performance analysis using GPU/CPU profilers
- Publication records in the field of machine learning (ICML, NeurIPS, ICLR) or relevant fields (CVPR, ICCV, ECCV, SIGIR)
- Machine learning
- Development
- Optimal transport
- Search algorithm
- CUDA
Research on 3D vision including visual SLAM and NeRF
In this research project, we will pursue new models and optimization techniques for image-based 3D sensing technologies such as Visual SLAM and NeRF. We aim to publish papers at top international conferences in the computer vision field, such as CVPR, ICCV, and ECCV.
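As a rough illustration of the gradient-based optimization that underlies much of differentiable 3D vision, here is a minimal sketch that recovers a camera translation by minimizing reprojection error with PyTorch autograd. The pinhole intrinsics, the rotation-free pose, and the synthetic data are illustrative assumptions, not the project's actual models.

```python
# Illustrative sketch: recover a camera translation by minimizing reprojection error.
# Rotation is omitted for brevity; intrinsics and data are toy assumptions.
import torch

torch.manual_seed(0)
fx = fy = 500.0
cx = cy = 320.0

pts = torch.rand(100, 3) * 2.0 + torch.tensor([0.0, 0.0, 4.0])  # points in front of the camera
t_true = torch.tensor([0.2, -0.1, 0.3])                          # hidden translation

def project(points, t):
    p = points + t                                                # translate into the camera frame
    u = fx * p[:, 0] / p[:, 2] + cx
    v = fy * p[:, 1] / p[:, 2] + cy
    return torch.stack([u, v], dim=1)

target = project(pts, t_true)                                     # observed 2D points
t = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([t], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((project(pts, t) - target) ** 2).mean()               # reprojection error
    loss.backward()
    opt.step()
print(t.detach(), t_true)                                         # t should approach t_true
```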
-
- Required skills and experience
-
- Research and/or development experience in deep learning using PyTorch, etc.
- Good mathematical understanding of 3D geometry
- Python
-
- Preferred skills and experience
-
- Knowledge and experience in 3D deep learning or classical VSLAM
- Knowledge and experience in numerical optimization
- Ability to implement custom forward/backward functions and GPU kernels in PyTorch, etc.
- Knowledge and experience with GitHub/GitLab and Docker
- Machine learning
- Computer vision
- Algorithm
- 3D vision
- Optimization
Research on the application of machine learning to physics simulation methods and results
We will conduct research and development on applying machine learning to the algorithms or data analysis of physics simulations (e.g., DFT, MD, tensor networks), and write papers for journals (e.g., Nature/Science family, Physical Review) or conferences (e.g., SC, ICML).
-
- Required skills and experience
-
- Knowledge and experience in physics simulation e.g. DFT/MD/Tensor Network
-
- Preferred skills and experience
-
- Knowledge and experience in machine learning
- Pytorch, Python
- Github, Docker
- Machine learning
- Physics simulation
Research on application of machine learning methods to physics simulation
We will conduct research and development on applying your own machine learning methods to physics simulations and write papers for related venues (Nature/Science family, Physical Review, SC/HPCG, etc.).
-
- Required skills and experience
-
- Knowledge and experience in machine learning
-
- Preferred skills and experience
-
- Knowledge and experience in physics simulation e.g. DFT/MD/Tensor Network
- Pytorch, Python
- Github, Docker
- Machine learning
- Physics simulation
Study on multi-agent path planning algorithms from the topological viewpoint
We will study approaches to multi-agent path planning from the topological viewpoint. Accepted interns are expected to work in collaboration with the mentors to submit research results to top international conferences in the field of artificial intelligence and machine learning.
-
- Required skills and experience
-
- Basic knowledge of topology and computational geometry
-
- Preferred skills and experience
-
- Research and development experiences on path planning, especially multi-agent path planning
- Publication record in the field of artificial intelligence (e.g., AAAI, IJCAI, AAMAS, ICML, NeurIPS, ICLR)
- Expert knowledge of topology and computational geometry
- Algorithm
- Path planning
- Multi-agent systems
Design of a soft manipulator that learns
We will design a soft manipulator that learns manipulation by leveraging body softness.
-
- Required skills and experience
-
- Experience in robotic mechanism design or robotic system design
-
- Preferred skills and experience
-
- Robot competition
- Experience in team-based development
- Publication in robotics (IROS, ICRA, etc.)
- Experience of receiving an award from an academic society or a scholarship
- Experience of coding in ROS, Python, or C++ in the development of a robot
- Robotics
- Development
- Mechanics design
- Robotic system development
Unsupervised manipulation primitive learning
We will develop a method for acquiring general robotic manipulation skills without reward function or demonstration.
-
- Required skills and experience
-
- Research experience using deep learning
- Research experience in robot learning
-
- Preferred skills and experience
-
- Publication record in robotics or machine learning (ICRA, IROS, CoRL, ICML, NeurIPS, ICLR, etc.)
- Management of projects and code using git/GitLab/GitHub
- Experience of coding in ROS, Python, or C++ in the development of a robot
- Experience in team-based development
- Knowledge related to robotics (e.g., control, optimization, HCI, computer vision, etc.)
- Machine learning
- Robotics
- Manipulation
- Skill discovery
- Unsupervised learning
Research on machine learning for few training data
In modalities and domains where pre-trained models, so-called foundation models, are available, target tasks can be achieved by fine-tuning on a small amount of data. On the other hand, more advanced machine learning techniques are required for tasks outside such modalities or domains. In this project, we will conduct research and development on such small-data machine learning and write a paper aiming at top international conferences in machine learning and computer vision.
-
- Required skills and experience
-
- Python, Github, Docker
- Knowledge and experience in deep learning
-
- Preferred skills and experience
-
- Knowledge and experience in computer vision
- Machine learning
- Transfer learning
- Domain adaptation
Effect analysis of real-time information visualization on web meeting
For multi-person web meetings held on Zoom, etc., we will design and carry out experiments using tools that visualize real-time information related to communication (amount of conversation, facial expressions, etc.). The experiments will also verify how such visualization facilitates exchanges and changes among meeting participants. The research results will be submitted to international conferences in the field of interaction, such as CHI.
-
- Required skills and experience
-
- Interested in human-computer interaction
-
- Preferred skills and experience
-
- Experience in conducting user-participatory experiments such as dialogue analysis
- Experience in paper submission to CHI
- Interaction
- Dialogue analysis
- HCI
- HRI
- HAI
Machine learning applications to multimodal biological data
The approach for biological data processing is shifting from inference based on the understanding of either time-series data or medical images in isolation to the fusion of multimodal data with different attributes and structures. In this project, we will conduct research and development on the application of machine learning to such multimodal biological data and write a paper for an international conference in the related field.
-
- Required skills and experience
-
- Python, Github, Docker
- Knowledge and experience in deep learning
-
- Preferred skills and experience
-
- Knowledge and experience in XAI
- Knowledge and experience in computer vision
- Knowledge and experience in tabular data machine learning
- Signal processing
- Multimodal understanding
Enhancing remote communication via physical avatar robots
Removing the restrictions of the physical workplace may resolve many social problems. However, remote work is still not a widely accepted working style, even under the current social situation. This project tackles this problem through robot-mediated human-to-human communication. Interns are expected to work with the mentors to submit a paper to an international conference in HCI/HRI.
-
- Required skills and experience
-
- Experience in user studies for dialog analysis
- Experience in participating in fully remote projects
-
- Preferred skills and experience
-
- Experience in JavaScript / Python Flask implementation
- Interaction
- Human sensing
- HCI
- HRI
- HAI
Research on law discovery from observed data
Research on causal analysis of time-series data and explainable AI is progressing toward making predictions while clarifying the laws underlying the data. For example, symbolic regression for scientific discovery is one such research topic. In this project, we will conduct research and development on new approaches to discovering such laws and write papers aiming for acceptance at top international conferences in machine learning or in journals. The internship is mainly aimed at Ph.D. students and is expected to last at least three months.
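For orientation only, here is a minimal sketch of one classic law-discovery baseline: sparse regression over a library of candidate terms (in the spirit of SINDy). The project aims at new approaches beyond this kind of baseline; the toy data and term library are assumptions for illustration.

```python
# Illustrative sketch: recover a hidden law y = 3x^2 - 0.5x by sparse regression
# over a library of candidate terms. Toy data; not the project's method.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = 3.0 * x ** 2 - 0.5 * x + rng.normal(0, 0.05, size=200)   # observations of the hidden law

# Candidate library of terms; the "law" is assumed to be a sparse combination of them.
library = np.column_stack([x, x ** 2, x ** 3, np.sin(x), np.exp(x)])
names = ["x", "x^2", "x^3", "sin(x)", "exp(x)"]

model = Lasso(alpha=0.01).fit(library, y)
for name, coef in zip(names, model.coef_):
    if abs(coef) > 1e-2:
        print(f"{coef:+.2f} * {name}")        # approximately recovers -0.5*x and +3*x^2
```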
-
- Required skills and experience
-
- Python, Github, Docker
-
- Preferred skills and experience
-
- Knowledge and experience in natural language processing
- Knowledge and experience in signal processing
- Machine learning
- Data mining
A mechanism and system of a high-speed soft robot
We will develop a mechanism or system for a high-speed robot. In particular, we will work on the computational design of mechanisms such as cable-driven systems, linkage systems, and cam mechanisms, and on additive manufacturing such as 3D printing. We will also develop a robotic system that responds quickly to an object and moves fast, including state estimation and prediction of a fast-moving object and motion planning within a short time.
-
- Required skills and experience
-
- Experience in robotic mechanism design or robotic system design
-
- Preferred skills and experience
-
- Robot competition
- Experience in team-based development
- Publication in robotics (IROS, ICRA, etc.)
- Experience of receiving an award from an academic society or a scholarship
- Experience of coding in ROS, Python, or C++ in the development of a robot
- Robotics
- Development
- Mechanics design
- Robotic system development
The development and research of a feature to estimate emotions from multiple modalities of users for a virtual chatbot.
People determine appropriate actions by treating the expressions, voice, and words displayed by others as rewards. This project focuses on the development and research of a feature that estimates emotions from multiple modalities of an avatarized user (facial, vocal, linguistic, and vital features).
- The main task is development, not research. However, if you wish, you can pursue research as an extension of this project.
- Contributions to international conference papers in the field of machine learning are also welcome.
-
- Required skills and experience
-
- Experience in Python development
- Experience in machine learning using PyTorch or similar frameworks
- Experience in natural language processing, audio signal processing, or image processing
- Ability to code using tools such as Git, Docker, and VSCode
-
- Preferred skills and experience
-
- Experience in application development using wxPython
- Experience in mobile application development (iPhone, Android)
- Experience in Unity development (C#)
- Experience in prompting using ChatGPT or GPT4
- Machine learning
- Computer vision
- Interaction
- Signal processing
- Natural language processing
- Image processing
- Audio processing
- Biosignal
- Affective computing
Learning long-horizon planning of manipulation
We will develop a method for learning long-horizon planning of manipulation, such as failure recovery, success detection of a sub-task, and operation using natural language.
-
- Required skills and experience
-
- Research experience using deep learning
- Research experience in robot learning
-
- Preferred skills and experience
-
- Publication record in robotics or machine learning (ICRA, IROS, CoRL, ICML, NeurIPS, ICLR, etc.)
- Management of projects and code using git/GitLab/GitHub
- Experience of coding in ROS, Python, or C++ in the development of a robot
- Experience in team-based development
- Knowledge related to robotics (e.g., control, optimization, HCI, computer vision, etc.)
- Machine learning
- Robotics
- Manipulation
- Long-horizon planning
Learning dynamic nonprehensile manipulation
We will develop a method for learning dynamic nonprehensile manipulation, such as dynamic re-orientation of a grasped object, hitting an object, and catching a flying object. A robot utilizes information from sensors for learning, such as a camera, a force sensor, and a tactile sensor.
-
- Required skills and experience
-
- Research and development experiences using deep learning
-
- Preferred skills and experience
-
- Knowledge related to robotics (e.g., control, optimization, HCI, computer vision, etc.).
- Publication record in robotics or artificial intelligence (IROS, ICRA, ICML, NeurIPS, ICLR, etc.)
- Experience of receiving an award from an academic society or a scholarship
- Robot competition or a machine learning competition
- Management of projects and code using git/GitLab/GitHub
- Machine learning
- Robotics
- Manipulation
- Nonprehensile manipulation
Validation study using a multimodal avatar dialogue dataset generation system
Our company is researching and developing a multimodal avatar dialogue dataset generation system. In this project, one of two research topics will be pursued using this dataset generation system. The first is to validate non-multimodal pre-trained models with datasets produced by the system, thereby demonstrating the system's effectiveness. The second is to create a number-sense dataset with the system and apply it to number-sense research. Interns will conduct either the first or the second research theme using this dataset generation system.
-
- Required skills and experience
-
- Experience in Python development
- Experience in machine learning using PyTorch or similar frameworks
- Experience in natural language processing, audio signal processing, or image processing
- Ability to code using tools such as Git, Docker, and VSCode
-
- Preferred skills and experience
-
- Experience in web system development using AngularJS
- Experience in web system development using Vue or React
- Machine learning
- Interaction
- Natural language processing
- Multimodal avatar dialogue
- Number sense
- Web application
- Affective computing
Action generation of a robot to entertain humans in a game
We will develop a method for generating actions of a robot to entertain humans in a game, such as table tennis and air hockey.
-
- Required skills and experience
-
- Any of the following:
- Research experience in HAI, HRI, or HCI
- Research experience using deep learning
- Research experience in robotics
- Robot development
-
- Preferred skills and experience
-
- Publication record in human-computer interactions (HAI, HRI, RO-MAN, CHI, SIGGRAPH, etc.)
- Management of projects and code using git/GitLab/GitHub
- Experience of coding in ROS, Python, or C++ in the development of a robot
- Experience in team-based development
- Knowledge related to robotics (e.g., control, optimization, HCI, computer vision, etc.)
- Robotics
- Interaction
- Development
- HCI
- HRI
- HAI
Video analysis of a ball game using machine learning
We are developing a technology to observe the behavior of humans and estimate their intention and desire. In this project, we will apply machine learning techniques to video analysis of a ball game, such as table tennis, extract the actions of players, and analyze the mechanism of decision making and interactions of multiple players.
-
- Required skills and experience
-
- Research experience using deep learning
-
- Preferred skills and experience
-
- Experience of winning a prize of a machine learning competition such as Kaggle
- Experience of research using Git/Github and Docker
- Experience of receiving an award from an academic society or a scholarship
- Publication in artificial intelligence (CVPR, ICCV, ECCV, NeurIPS, ICML, ICLR, TPAMI, etc.)
- Machine learning
- Computer vision
- Interaction
- Human sensing
- Ball game
The research and development of emotional expression methods to encourage empathy between virtual chatbots and users.
We will conduct research on incorporating gestures, expressions, and behaviors that encourage empathy into virtual chatbots to enable them to engage in compassionate conversations with people, based on the behavior of skilled nurses and caregivers in the field. This project involves various tasks, such as data collection experiments, development, and research, and we will prioritize your interests and goals.
・Contributions to international conference papers in the field of machine learning are also welcome.
-
- Required skills and experience
-
- Experience in Python development
- Experience in machine learning using PyTorch or similar frameworks
- Experience in natural language processing, audio signal processing, or image processing
- Ability to code using tools such as Git, Docker, and VSCode
-
- Preferred skills and experience
-
- Experience in application development using wxPython
- Experience in mobile application development (iPhone, Android)
- Experience in Unity development (C#)
- Experience in prompting using ChatGPT or GPT4
- Machine learning
- Computer vision
- Interaction
- Signal processing
- Natural language processing
- Image processing
- Audio processing
- Biosignal
- Affective computing
The research and development of continuous blood pressure estimation using data assimilation of ultrasound data and fluid simulation
This research aims to develop an inference model that can reproduce blood pressure through data assimilation, bringing fluid-dynamics calculations closer to ultrasound observations of blood flow. Contributions to international conference papers in the fields of machine learning and biomedical engineering are also welcome.
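To illustrate the data-assimilation building block mentioned in the preferred skills, here is a minimal sketch of a stochastic ensemble Kalman filter (EnKF) update with a linear observation operator. The state, observation operator, and noise levels are toy assumptions; the actual project couples assimilation with ultrasound measurements and fluid simulation.

```python
# Illustrative sketch: one stochastic EnKF analysis step on toy data.
# Linear observation operator and dimensions are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 50, 10, 100

H = np.zeros((n_obs, n_state))                 # observe every 5th state variable
H[np.arange(n_obs), np.arange(0, n_state, 5)] = 1.0
R = 0.1 * np.eye(n_obs)                        # observation-noise covariance

truth = np.sin(np.linspace(0, 2 * np.pi, n_state))
y_obs = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)

ensemble = rng.normal(0.0, 1.0, size=(n_state, n_ens))          # forecast ensemble
anom = ensemble - ensemble.mean(axis=1, keepdims=True)
P = anom @ anom.T / (n_ens - 1)                                 # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)                    # Kalman gain

# Perturbed-observation update applied to every ensemble member.
y_pert = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
analysis = ensemble + K @ (y_pert - H @ ensemble)

print("forecast RMSE:", np.sqrt(np.mean((ensemble.mean(1) - truth) ** 2)))
print("analysis RMSE:", np.sqrt(np.mean((analysis.mean(1) - truth) ** 2)))
```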
-
- Required skills and experience
-
- Experience in PyTorch development
- Experience in Fortran development
- Experience in research and development of fluid simulation using finite volume method
- Knowledge of blood pressure and blood flow
- Knowledge of fluid dynamics
-
- Preferred skills and experience
-
- Knowledge and implementation skills in deep reinforcement learning
- Knowledge and implementation skills in ensemble Kalman filters
- Experience in research and development of continuous blood pressure estimation, blood pressure, and blood flow diagnostic devices would be a plus
- Machine learning
- Signal processing
- Fluid simulation
- Finite volume method
- Biosignal
- Deep reinforcement learning
Learning dual-arm manipulation
We will develop reinforcement learning, imitation learning, and learning-from-demonstration methods for dual-arm manipulation.
-
- Required skills and experience
-
- Research experience using deep learning
- Research experience in robot learning
-
- Preferred skills and experience
-
- Publication record in robotics or machine learning (ICRA, IROS, CoRL, ICML, NeurIPS, ICLR, etc.)
- Management of projects and code using git/GitLab/GitHub
- Experience of coding in ROS, Python, or C++ in the development of a robot
- Experience in team-based development
- Knowledge related to robotics (e.g., control, optimization, HCI, computer vision, etc.)
- Machine learning
- Robotics
- Manipulation
- Dual-arm robot
Vision-based manipulation learning
We will develop a method for learning vision-based manipulation, covering topics such as transfer learning, generalization, multimodal learning, and cross-modal learning using force, tactile, and audio information as well as natural language.
-
- Required skills and experience
-
- Research experience using deep learning
- Research experience in robot learning
-
- Preferred skills and experience
-
- Publication record in robotics or machine learning (ICRA, IROS, CoRL, ICML, NeurIPS, ICLR, etc.)
- Management of projects and code using git/GitLab/GitHub
- Experience of coding in ROS, Python, or C++ in the development of a robot
- Experience in team-based development
- Knowledge related to robotics (e.g., control, optimization, HCI, computer vision, etc.)
- Machine learning
- Robotics
- Computer vision
- Manipulation
- Representation
Research on learnable discrete information processing
Research on making specific computational modules machine-learnable so that they can be included in a deep learning pipeline (e.g., differentiable rendering) is underway. This project will pursue research on making discrete information processing learnable and write papers for international machine learning conferences such as ICLR, ICML, and NeurIPS.
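As a small illustration of what "learnable discrete processing" can mean, here is a minimal sketch of the straight-through estimator, one common trick for passing gradients through an argmax/one-hot choice. It is an example of the general topic, not the project's specific method.

```python
# Illustrative sketch: straight-through estimator for a hard one-hot choice.
# Forward pass is discrete; backward pass uses the softmax surrogate.
import torch
import torch.nn.functional as F

def st_one_hot(logits):
    """Forward: hard one-hot. Backward: gradients flow through the softmax."""
    soft = F.softmax(logits, dim=-1)
    index = soft.argmax(dim=-1, keepdim=True)
    hard = torch.zeros_like(soft).scatter_(-1, index, 1.0)
    return hard + (soft - soft.detach())       # value is hard, gradient is soft

logits = torch.randn(4, 10, requires_grad=True)
y = st_one_hot(logits)                          # discrete one-hot in the forward pass
loss = (y * torch.arange(10.0)).sum()
loss.backward()                                 # logits.grad is non-zero despite the argmax
print(logits.grad.abs().sum().item() > 0)
```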
-
- Required skills and experience
-
- Python, Github, Docker
- Knowledge and experience in deep learning
-
- Preferred skills and experience
-
- Mathematical knowledge and ability to formulate methods for machine learning and deep learning
- Expertise in convex optimization
- Machine learning
- Algorithm
- Optimization
Representation learning for structured data
Pre-trained models based on supervised/self-supervised learning with large amounts of data are widely used in machine learning for natural language and images, and the concept of foundation models is gaining ground. In this project, we will conduct research and development on representation learning for data with unique structures other than images and natural language, and write a paper aiming at journals such as the Nature and Science families.
-
- Required skills and experience
-
- Python, Github, Docker
-
- Preferred skills and experience
-
- Knowledge and experience in representation learning with images/natural language
- Knowledge and experience in machine learning with point clouds/graphs
- Mathematical knowledge of machine learning and deep learning and ability to formulate equations
- Machine learning
- Representation learning
- Point cloud processing
- Graph processing
Research on fusion understanding of image/video and natural language
While there is an enormous amount of research on machine learning for understanding images and natural language, deep learning has commoditized the modules of each field, and research combining multiple modalities is increasing. In this project, we will develop research on the fused understanding of image/video and natural language and write papers for relevant top international conferences.
-
- Required skills and experience
-
- Python, Github, Docker
- Knowledge and experience in deep learning
-
- Preferred skills and experience
-
- Knowledge and experience in natural language processing
- Knowledge and experience in computer vision
- Computer vision
- Natural language processing
- Multimodal understanding
Multimodal understanding of specialized documents
Understanding technical documents such as papers and patents requires understanding data that includes structured text and diagrams, which calls for efforts beyond the framework of conventional natural language processing. In this project, we will conduct research and development on the multimodal understanding of such specialized documents and write papers for international conferences in related fields.
-
- Required skills and experience
-
- Python, Github, Docker
- Knowledge and experience in deep learning
-
- Preferred skills and experience
-
- Knowledge and experience in natural language processing
- Mathematical knowledge and formulation ability in machine learning and deep learning
- Computer vision
- Natural language processing
- Multimodal understanding
Research on human-in-the-loop machine learning
Research on machine learning that incorporates humans in the loop and makes efficient use of human feedback is expanding. In this project, we will conduct research and development on human-in-the-loop machine learning and write papers aiming for publication at top international conferences in machine learning and interaction or in journals.
-
- Required skills and experience
-
- Python, Github, Docker
- Knowledge and experience in deep learning
-
- Preferred skills and experience
-
- Knowledge and experience in human-computer interaction
- Machine learning
- Interaction
- HCI
- HRI
- HAI
AI for Science
You will work on AI research that accelerates and automates research and development itself. You will participate in sub-projects toward realizing AI scientists that can formulate research claims, run experiments, analyze the results, and write papers in interactive co-evolution with human researchers.
-
- Required skills and experience
-
- Python, Github, Docker
- Knowledge and experience in deep learning
-
- Preferred skills and experience
-
- Knowledge and experience in natural language processing, computer vision, and data science
- Mathematical knowledge and formulation ability in machine learning and deep learning
- Machine learning
- Interaction
Research on high-dimensional black box optimization
Black-box optimization methods such as Bayesian optimization suffer from growing computational overhead as the number of parameters to be optimized increases. In this project, we will develop research on high-dimensional black-box optimization and write papers for top international conferences in machine learning or for journals.
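To make the scaling issue concrete, here is a minimal sketch of Bayesian optimization with a Gaussian-process surrogate and expected improvement on a toy 20-dimensional objective. The objective, candidate sampling, and hyperparameters are assumptions for illustration; they are not the project's method.

```python
# Illustrative sketch: GP-based Bayesian optimization with expected improvement.
# Toy objective and random candidate search; cost grows quickly with dimension.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    return np.sum((x - 0.3) ** 2)                     # toy function to minimize

dim, n_init, n_iters, n_cand = 20, 10, 30, 2048
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(n_init, dim))
y = np.array([objective(x) for x in X])

for _ in range(n_iters):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)                                      # surrogate model of the objective
    cand = rng.uniform(0, 1, size=(n_cand, dim))      # random candidates: this is where
    mu, sigma = gp.predict(cand, return_std=True)     # high dimensionality hurts
    best = y.min()
    imp = best - mu
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)      # expected improvement (minimization)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best value found:", y.min())
```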
-
- Required skills and experience
-
- Python, Github, Docker
-
- Preferred skills and experience
-
- Knowledge and experience with black box optimization such as Bayesian optimization
- Algorithm
- Optimization
Learning trustworthy manipulation
We will develop a method for learning trustworthy manipulation.
-
- Required skills and experience
-
- Research experience using deep learning
- Research experience in robot learning
-
- Preferred skills and experience
-
- Publication record in robotics, machine learning, or human-computer interaction (ICRA, IROS, CoRL, ICML, NeurIPS, ICLR, CHI, HRI, etc.)
- Management of projects and code using git/GitLab/GitHub
- Experience of coding in ROS, Python, or C++ in the development of a robot
- Experience in team-based development
- Knowledge related to robotics (e.g., control, optimization, HCI, computer vision, etc.)
- Machine learning
- Robotics
- Interaction
- Manipulation
- Trustworthy AI
Conditions
Term: | 3 months or longer (assuming 5 working days per week). Start and end dates can be adjusted. Some projects accept short-term interns for as little as 1 month. |
---|---
Hours: | Full-time or part-time (e.g., 3 days a week; negotiable). 45-minute breaks. Weekends and holidays off. |
Location: | On-site, hybrid, or remote options are available. Hybrid and remote options are only available if you live in Japan; due to legal issues, we cannot pay salaries to remote interns who live outside of Japan, so if you join our internship program you must come to Japan. In such cases, we offer support for travel expenses. Some internship projects may require on-site work; in that case, you will be assigned to one of our offices in Hongo or Shinagawa. |
Salary: | The full-time monthly salary ranges from 240,000 JPY to 480,000 JPY. An hourly rate applies for part-time work. Social security and other benefits are provided according to the working conditions. Transportation and housing expenses are fully covered. In addition, other expenses necessary for research activities (PC, laptop, etc.) are fully supported. |
Language: | Japanese or English (English-only communication is also fine). |
Others: | Two or more mentors with extensive research experience will provide in-depth support for each project. Computational resources (workstations and GPU server clouds) and robotic facilities (robotic arms, various sensors, 3D printers, motion capture systems, and other prototyping and experimental equipment) are available. |
How to apply
Please fill in the application form. We will first screen each application based on the information provided.
For other inquiries, please contact internships@sinicx.com.
Those who pass the above screening will be interviewed remotely. Please prepare slides or other materials introducing your past research and development activities and achievements.
Please contact us at least three months in advance if you need a visa to enter Japan.