
Incorporating Error Correction and 3D Coordinate Conversion in 3D TOF Sensor Modules

TANAKA Hiroyuki
Module Development Dept.
Product Development Division
Business Management Division H.Q.
Electronic and Mechanical Components Company
Specialty: Electronic Information Engineering
ISHII Akihiro
Module Development Dept.
Product Development Division
Business Management Division H.Q.
Electronic and Mechanical Components Company
Specialty: Electrical and Electronic Engineering

In recent years, 3D TOF sensors that can acquire three-dimensional information over a wide viewing angle have attracted attention for various service robots, such as autonomous driving robots used in factories and nursing care robots. Because 3D TOF sensors can acquire spatial information, they are expected to replace the displacement sensors and 2D cameras that have been used so far. In the TOF method, distance is measured from the flight time of light. Like a general CMOS image sensor, a TOF sensor reads the imaged information as a voltage output and calculates the distance from this output data. Since the calculated distance data include errors caused by factors that differ from sensor to sensor, appropriate correction processing must be performed for each sensor in order to obtain high accuracy. Furthermore, since the distance information obtained directly from the sensor is one-dimensional, the distance and angle information must be converted into three-dimensional coordinates. Incorporating such post-processing, that is, distance calculation, correction, and conversion, into the user's system places a heavy burden on the user's development. To execute these processes inside the sensor, we developed a 3D TOF sensor module equipped with a mechanism that incorporates the arithmetic expressions common to all units and sets the optimum parameters for each sensor at the time of shipment. As a result, we realized a function that can acquire highly accurate 3D information with a single command.

1. Introduction

In recent years, the use of various service robots, such as autonomous driving robots used in factories and nursing care robots, has been expanding because of the decreasing working population that results from the declining birthrate and aging population, and 3D TOF sensors that can acquire 3D information over a wide viewing angle have been attracting attention. Since the 3D TOF sensor can acquire spatial information, it is expected to replace the displacement sensors and 2D cameras that have been used so far1).

TOF is an abbreviation of "Time of Flight": a distance is measured based on the flight time of light, from the moment the light is emitted from the source to the moment it returns to the sensor after being reflected by the target object. The TOF sensor reads the imaged information as a voltage output and calculates the distance from this output data, in the same way as a general CMOS image sensor. Such distance data are then converted into 3D information. Generally, software development kits (SDKs) are available for calculating 3D information from distance data, but each TOF sensor requires corrections for errors in the individual sensor elements, errors due to assembly of the lenses, and errors due to temperature changes. These corrections must be applied to determine the distance accurately, and developing them places a significant burden on the user.

To reduce this burden by executing these correction processes within the TOF sensor, we developed a technology that realizes high accuracy by incorporating the arithmetic expressions required for calculation, correction, and conversion of the distance into the sensor and by using the optimum parameters for each individual sensor.

2. Principle of distance detection by TOF

2.1 How TOF works

A distance is determined by measuring the time from when the light is emitted from the source until it is received by the detector.

Distance d can be determined by the following Equation (1) using the flight time t_TOF and the speed of light c.

d = (c · t_TOF) / 2    (1)
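As a rough worked example with round numbers (not a value taken from this article), a flight time of t_TOF = 6.67 ns gives d = (3.0 × 10^8 m/s × 6.67 × 10^-9 s) / 2 ≈ 1.0 m.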

There are two major measuring methods of the flight time of light: direct method and indirect method.

In the direct method, the distance is determined by directly measuring the flight time of light, from when the light is emitted from the source until the light reflected by the target object is received by the detector.

In the indirect method, the distance is determined from the phase difference between the periodically modulated emitted light and the light reflected from the target object2).

2.2 Phase difference method

Our TOF sensor uses the indirect method (phase difference method). Fig. 1 shows the principle of the phase difference method. Infrared light emitted from the photo transmitter (emitted light) is reflected by the target object, and the reflected infrared light (received light) is detected by the photo receiver. The distance to the target object can be determined from the phase difference pd between the emitted light and the received light. The timing at which the emitted light is transmitted and the received light is detected is controlled by signals input to the photo transmitter and the photo receiver from the control circuit. When the received light is detected at phase angles of 0, 90, 180, and 270 degrees delayed from the emitted light, the relationship between the quantity of light received at each timing and the phase difference pd is as shown in Fig. 2. Here, vector x is the difference vector between the received light quantities at the 0-degree and 180-degree phase delays, vector y is the difference vector between the received light quantities at the 90-degree and 270-degree phase delays, and the phase difference pd corresponds to the angle between the synthetic vector x + y and the X-axis. The magnitude of the synthetic vector x + y corresponds to the intensity of the received light (amplitude).

Fig. 1 Principle of distance detection by TOF
Fig. 2 Relation between received light quantity and phase difference pd
  • x : difference vector between the received light quantities at phase delays of 0 degrees and 180 degrees
  • y : difference vector between the received light quantities at phase delays of 90 degrees and 270 degrees

The received light quantity is obtained from the electric charge generated and accumulated by light in each element of the photo receiver. Fig. 3 shows a schematic diagram of how the electric charge is accumulated.

Fig. 3 Mechanism of electric charge accumulation by received light

Control signal C1 is generated without a phase delay from the emitted light. The electric charge Q1 corresponds to the quantity of light received during the period from rising to falling of C1. Similarly, the electric charges Q2, Q3, and Q4 correspond to the quantity of light received during the period from rising to falling of C2, C3, and C4, respectively.

There is a 90-degree phase delay between C1 and C3, between C3 and C2, and between C2 and C4. Since the quantity of received light corresponds to the accumulated electric charge, and since the relationship between the phase difference pd and the quantity of received light is as shown in Fig. 2, the phase difference pd can be determined from the four electric charges Q1, Q2, Q3, and Q4 using Equation (2).

pd = arctan((Q3 - Q1) / (Q1 - Q2))    (2)

pd = arctan((Q3 - Q4) / (Q1 - Q2))    (2)

The maximum measurable distance corresponds to a half wavelength of the emitted light and is expressed as c/(2f), where c is the speed of light and f is the modulation frequency of the emitted light. The distance d corresponding to the phase difference pd is the maximum distance multiplied by pd/2π, as indicated in Equation (3)3).

d = (c / (2f)) · (pd / (2π))    (3)

The quantity of electric charge accumulated from the received light corresponds to the intensity of the received light (amplitude) Amp, which is the magnitude of the synthetic vector x + y and can be determined from Equation (4).

Amp = √((Q2 - Q1)² + (Q4 - Q3)²) / 2    (4)

A larger value of Amp means a larger quantity of reflected light, which makes more accurate distance determination possible.
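As a minimal illustration of Equations (2) through (4), the following Python sketch computes pd, d, and Amp from the four charges. The function and variable names are illustrative only and are not part of the sensor module's interface.

import math

C = 299_792_458.0  # speed of light [m/s]

def distance_and_amplitude(q1, q2, q3, q4, f_mod):
    """Illustrative calculation following Equations (2)-(4).

    q1, q2, q3, q4 : charges accumulated by control signals C1-C4
                     (phase delays of 0, 180, 90, and 270 degrees; see Fig. 3)
    f_mod          : modulation frequency f of the emitted light [Hz]
    Returns (distance d [m], amplitude Amp).
    """
    # Equation (2): atan2 keeps pd in the full 0..2*pi range and avoids a
    # division by zero when q1 == q2.
    pd = math.atan2(q3 - q4, q1 - q2) % (2.0 * math.pi)

    # Equation (3): maximum range c/(2f) scaled by pd/(2*pi).
    d = (C / (2.0 * f_mod)) * (pd / (2.0 * math.pi))

    # Equation (4): intensity (amplitude) of the received light.
    amp = math.sqrt((q2 - q1) ** 2 + (q4 - q3) ** 2) / 2.0
    return d, amp

For example, with a hypothetical modulation frequency of 10 MHz, the maximum measurable range c/(2f) is about 15 m.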

3. Technical issues

In the case of an autonomous driving robot that uses TOF sensors, poor measurement accuracy may cause the robot to collide with obstacles because of incorrectly determined distances. In applications that require identification of a person or other object, or identification of its state, poor accuracy may also result in erroneous detection. High accuracy in determining distance is required to prevent such incidents.

Because the distance is determined using Equations (2) and (3) according to the principle of distance determination in Section 2, the quantity of the electric charge obtained must be accurate.

The quantity of the electric charge depends on the image sensor and the condenser lens of the photo receiver, as well as on the light emission circuit of the photo transmitter. The characteristics of these elements vary, and there are error factors that depend on the lot and the individual unit; these are difficult to eliminate completely by hardware, so correction is necessary for each individual unit.

Applying such corrections to individual units is burdensome for the user, because the error factors must be identified and countermeasures implemented. It is therefore necessary to incorporate a mechanism that makes the required corrections of the various characteristics of each individual unit, in order to reduce the burden on the user and maximize the effect of the correction. In addition, there are other issues, such as how to increase the measurable distance range and how to measure a variety of target objects, because the quantity of the electric charge accumulated depends on the distance to and reflectivity of the target object.

The issues related to the various types of errors that require correction are explained below.

3.1 Issues related to linearity of distance determination

The phase difference pd determined by Equation (2) changes linearly with the phase difference between the emitted and received light when the emitted infrared light has an ideal sine waveform. On the other hand, when the emitted infrared light has a rectangular waveform, pd changes along an undulating curve. Fig. 4 shows simulation results of the phase difference pd calculated by Equation (2) with respect to the actual phase difference between the emitted light and the received light, for a sine waveform and for a rectangular waveform. Because the phase difference pd changes along an undulating curve, the distance d determined from pd by Equation (3) also changes along an undulating curve.

Fig. 4 Calculation results of phase difference pd due to differences in the emitted and received light wave form

Fig. 5 is a graph showing the distance output versus the actual distance. The horizontal axis represents the actual distance, the vertical axis represents the distance output, and the blue line plots the distance output when the emitted light has a rectangular waveform. The actual distance output deviates from the ideal distance output shown by the red line. The graph shows that the distance output is smaller or larger than the actual distance depending on the position of the target object being measured; to determine the distance stably, linearity with respect to distance must be established.

Fig. 5 Deflection in distance output

3.2 Issues related to dispersion of distance output between sensor elements

Even when linearity is established for each sensor element, a consistent distance output cannot be obtained for the same distance on a plane if there are differences in the distance outputs between sensor elements. Fig. 6 shows the dispersion of the distance outputs when a flat plane is measured. Fig. 6 (a) shows the measurement environment, in which the target plane is placed 1 meter from the TOF sensor. The blue plots in Fig. 6 (b) show the distance outputs from the sensor elements at the positions on the image sensor corresponding to the blue line at the center of the plane shown in Fig. 6 (a). The horizontal axis represents the positions of the ±40 sensor elements from the center of the blue line, and the vertical axis represents the distance output from each sensor element. When a perfectly flat plane is measured, the distance outputs should ideally form the straight line shown in red, but the actual outputs are dispersed around this red line.

Fig. 6 Dispersion of distance output when a plane is measured: (a) Environment of Measurement; (b) Sensor Element Position and Distance Output

Such dispersions are produced mainly for the following reasons.

  • Photoelectric conversion sensitivity of each sensor element
  • Error in A-D conversion of electric charge by the sensor element

3.3 Issues related to fluctuation of distance output due to temperature change

Fig. 7 is a graph showing the distance output when the temperature is changed while the position of the target plane is kept unchanged. The horizontal axis represents the temperature, and the blue plots show the distance output at each temperature condition on the vertical axis. Ideally, the distance output would be constant and independent of temperature, as indicated by the red line in Fig. 7, but the distance output tends to increase as the temperature rises. The issue is to reduce the fluctuation of the distance output due to temperature changes in order to allow stable distance measurement independent of temperature.

Fig. 7 Fluctuation of distance output due to temperature

The fluctuation due to temperature change is produced mainly for the following reasons.

  • Delay of light emission timing by the control circuit
  • Delay of light receiving timing by the control circuit
  • Photoelectric conversion sensitivity of each sensor element

3.4 Issues related to distance measurement range and reflectivity of the target object

As discussed in Subsection 2.2, the maximum measurable distance depends on the frequency of the emitted light; however, for measurement to be possible, an adequate electric charge must also be accumulated. Accordingly, the distance that can actually be measured is determined by the time over which the image sensor accumulates the electric charge, and this electric charge depends on the quantity of light received. As indicated in Equation (4), the electric charge can be expressed by the intensity of the received light (amplitude) Amp, which is normalized to a range of 0 to 255. Amp becomes smaller when the distance to the target object increases and/or the reflectivity of the target is low, and becomes larger when the distance decreases and/or the reflectivity is high. Therefore, when a target object at a greater distance or with low reflectivity is measured, an adequate quantity of received light must be secured by increasing the electric charge accumulation time.

Conversely, the accumulation time must be reduced when a target object at a short distance or with high reflectivity is measured. This is because the electric charge that can be accumulated by each sensor element of the image sensor is limited, and the accumulated charge saturates when accumulation continues for too long, which makes determination of the distance impossible.

Fig. 8 shows the environment used to measure the difference in measurable range for different accumulation times. The measurement is made by placing a white paperboard with a reflectivity of 56.3% and a black paperboard with a reflectivity of 5.5% at distances of 1 m, 2 m, and 4 m from the TOF sensor. Fig. 9 shows the difference in measurable range for different accumulation times: Fig. 9 (a) shows the measurement results with an accumulation time of 3200 μs, and Fig. 9 (b) shows the results with an accumulation time of 200 μs. In Fig. 9 (a) and (b), measurements from short to long distance are represented by a color change from red to blue; violet indicates that Amp exceeds 255 and the electric charge is saturated, and black indicates that Amp is 0 and the distance cannot be correctly determined. In Fig. 9 (a), the targets at 4 m are correctly measured, but at 1 m and 2 m the distance to the white paperboard, which has higher reflectivity than the black paperboard, cannot be determined correctly because the electric charge is saturated. In Fig. 9 (b), no saturation of the electric charge is observed, but at 1 m, 2 m, and 4 m the distance to the black paperboard, which has lower reflectivity than the white paperboard, is not correctly determined.

Fig. 8 Measuring environment of measurable range
Fig. 9 Difference of measurable range by different accumulation time

This means that the electric charge accumulation time must be set according to the distance to and reflectivity of the target object, and therefore target objects at short and long distances, or with low and high reflectivity, cannot be measured at the same time. The issue is how to expand the measurable ranges of distance and target reflectivity.

3.5 Issues related to three-dimensional conversion

While the 3D TOF sensor can obtain distance information within the imaging range in the same way as a common CMOS image sensor, it obtains only one-dimensional distance information between each sensor element and the target object. For applications that measure the size and shape of a target object or the distance between multiple target objects, 3D information is necessary.

To convert the 1D distance information into 3D coordinate information, the incident angle of the received light to each sensor element through the lens is required in addition to the distance information for each sensor element. In Fig. 10, the image sensor equipped with the lens is located at the origin of the XYZ coordinate system, and the optical axis of the lens lies on the Z-axis. The magnitude of r corresponds to the distance obtained by the image sensor. The direction of r is the direction from which light reflected from the target object is incident on the sensor element of the image sensor, and it is uniquely determined by θ and Φ for each element.

Fig. 10 Correspondence of 1D distance information to the 3D coordinates
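Under the usual spherical convention, assumed here for illustration (θ measured from the optical axis, i.e., the Z-axis, and Φ the azimuth in the X-Y plane), the 3D coordinates of a point at distance r are X = r·sinθ·cosΦ, Y = r·sinθ·sinΦ, and Z = r·cosθ; the module's internal definition of the angles may differ.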

The values of θ and Φ include angular errors caused by the shift of the optical axis and/or the deflection introduced when the lens is assembled into an individual unit. Fig. 11 shows the output Z coordinate obtained from the values of θ and Φ when the optical axis is shifted by 20 sensor elements in the X-axis direction. The flat plane is measured in the same manner as in Fig. 6 (a), with attention directed to the blue line at the center. The graph in Fig. 11 plots the output coordinate Z in blue, with the horizontal axis representing the sensor element positions within ±20 elements from the center and the vertical axis representing the output coordinate. Ideally, the output coordinate Z should be constant regardless of the position of the sensor element, as indicated by the red line in Fig. 11, but it changes depending on the position of the sensor element because of the angular errors caused by the shift of the optical axis. The issue is to convert the measurement data to 3D coordinates while taking the effect of these angular errors into consideration.

Fig. 11 Fluctuation of output coordinate Z due to shift of optical axis

4. Correction method and its effectiveness

As discussed in Section 3, the distance output contains errors due to various factors, and because these errors are peculiar to individual units, a correction mechanism tailored to each individual unit is required. Until now, such corrections were made using PC software common to every unit, so the corrections were not optimized for the respective units.

The arithmetic block incorporated in the TOF sensor we developed consists of an SoC, external memory, and built-in ROM, as shown in Fig. 12. The arithmetic expressions for correction and conversion are incorporated into the SoC. Parameter information unique to each individual unit is written into the built-in ROM at the time of shipment as table data containing the optimum parameters. When the TOF sensor is started by the user, the table data are read out from the built-in ROM into the external memory, and the SoC computes the distance, applies the corrections, and performs the coordinate conversion using the phase data from the image sensor.

Fig. 12 Configuration of arithmetic block
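As a rough host-side analogy of this mechanism, the sketch below models the per-unit table as a small file that is written once at "shipment" and read back at start-up. The field names, the JSON format, the file path, and the dummy values are assumptions for illustration only and do not reflect the module's actual ROM layout or command set.

import json

def write_unit_table(path="unit_table.json"):
    """Shipment-time step: store parameters unique to one unit (dummy values)."""
    table = {
        "linearity": {"distance_m": [0.5, 1.0, 2.0, 4.0],
                      "deflection_m": [0.02, -0.03, 0.04, -0.02]},  # Subsection 4.1
        "temperature": {"tx_coeff": 0.001, "rx_coeff": 0.002},       # Subsection 4.3
        "angles": "per-element theta/phi tables go here",            # Subsection 4.5
    }
    with open(path, "w") as f:
        json.dump(table, f)

def read_unit_table(path="unit_table.json"):
    """Start-up step: copy the table into working memory so that distance
    calculation, correction, and coordinate conversion can use it."""
    with open(path) as f:
        return json.load(f)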

The solutions to the respective issues for establishing distance accuracy discussed in Section 3, and their effects, are explained below.

4.1 Linearity correction of distance

As shown in Fig. 5 in Subsection 3.1, the distance before correction deviates from the ideal straight line. To bring the obtained distance close to the ideal straight line, a driving circuit that converts the rectangular wave of the emitted light into a sine wave is incorporated. In a real application, however, it is difficult to obtain an ideal sine wave, and correction of the obtained distance is still necessary.

The parameter used to compute the deflection of the measured distance, which is required for the correction, is obtained for each unit, summarized in a table, and incorporated into the sensor. A single arithmetic expression common to all sensors is incorporated for computing the deflection. Because the deflection is obtained for each sensor using the table unique to that sensor, an optimally corrected distance can be obtained from each sensor.
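Conceptually, the correction subtracts the unit-specific deflection from the raw distance. The sketch below does this with an interpolated lookup table; the table values and the use of linear interpolation are assumptions for illustration, not the actual expression built into the SoC.

import numpy as np

def correct_linearity(d_raw, lut_distance_m, lut_deflection_m):
    """Subtract the per-unit deflection stored at shipment from the raw distance.

    d_raw            : raw distance output [m] (scalar or array)
    lut_distance_m   : distances at which the deflection was characterized [m]
    lut_deflection_m : deflection of the output at those distances [m]
    """
    deflection = np.interp(d_raw, lut_distance_m, lut_deflection_m)
    return d_raw - deflection

# Example with made-up table values:
# lut_d = [0.5, 1.0, 2.0, 4.0]
# lut_e = [0.02, -0.03, 0.04, -0.02]
# print(correct_linearity(1.5, lut_d, lut_e))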

Fig. 13 (a) is a graph showing the relationship between the actual distance and the distance output, where the horizontal axis represents the actual distance from the image sensor to the target object and the vertical axis represents the distance output from the image sensor. The distance output without correction is plotted in blue, as in the graph of Fig. 5, and the distance with correction applied is plotted in orange. In Fig. 13 (b), the error of the distance output in Fig. 13 (a) from the actual distance is plotted along the vertical axis. Comparing the distance without correction and the distance with correction applied, the linearity of the corrected distance is improved and the error is also reduced.

Fig. 13 Results of linearity correction: (a) Actual Distance and Distance Output; (b) Error between Actual Distance and Distance Output

4.2 Correction of dispersion of distance output between sensor elements

After the linearity of the distance from each sensor element has been established, the distances obtained from the respective sensor elements are smoothed using a spatial filter to reduce the dispersion between elements. Fig. 14 compares the dispersion of the distance outputs from the sensor elements with and without the spatial filter applied. The flat plane is measured in the same manner as in Fig. 6 (a), with attention directed to the blue line at the center. In the graph of Fig. 14, the horizontal axis represents the positions of the ±40 sensor elements from the center of the line, and the vertical axis represents the distance output from each sensor element. The blue plots show the distance without filtering, and the orange plots show the distance with filtering applied. The graph shows that the dispersion of the distance outputs between sensor elements is reduced when filtering is applied.

Fig. 14 Effect of Correction of Dispersion between Sensor Elements
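The article does not specify the filter, so the sketch below uses a 3×3 median filter as one plausible spatial filter; both the window size and the filter type are assumptions for illustration.

import numpy as np
from scipy.ndimage import median_filter

def smooth_between_elements(distance_map, size=3):
    """Spatially smooth the per-element distance outputs to reduce the
    dispersion between neighboring sensor elements."""
    return median_filter(distance_map, size=size)

# Example with a synthetic noisy 1 m plane (array shape is arbitrary):
# d = 1.0 + 0.01 * np.random.randn(240, 320)
# d_smoothed = smooth_between_elements(d)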

4.3 Correction of fluctuation of distance outputs due to temperature change

As discussed in Subsection 3.3, the delay of light emission and reception timings due to temperature changes will produce fluctuations in the distance output. Our TOF sensor is equipped with temperature sensors to obtain the temperature at the photo transmitter and at the photo receiver (image sensor).

Fluctuations in the distance output caused by temperature changes vary depending on variations in the circuits in the photo transmitter and the photo receiver.

The parameters used to compute the distance fluctuation that depends on the temperature of the photo transmitter and the distance fluctuation that depends on the temperature of the photo receiver are obtained for each sensor and stored in the sensor at the time of shipment. A single arithmetic expression common to all sensors is incorporated for computing the distance fluctuation. Because the distance fluctuation is obtained for each sensor using the parameters unique to that sensor together with the temperatures of the photo transmitter and the photo receiver, an optimally corrected distance can be obtained from each sensor during use.
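The exact expression is not published, so the sketch below assumes, purely for illustration, a linear drift in each of the two temperatures, with the per-unit coefficients standing in for the parameters written at shipment.

def correct_temperature(d, t_tx, t_rx, coeff_tx, coeff_rx, t_ref=25.0):
    """Remove the temperature-dependent fluctuation from the distance output.

    d        : distance output [m]
    t_tx     : temperature of the photo transmitter [deg C]
    t_rx     : temperature of the photo receiver (image sensor) [deg C]
    coeff_tx : per-unit drift coefficient for the transmitter [m/deg C]
    coeff_rx : per-unit drift coefficient for the receiver [m/deg C]
    t_ref    : assumed reference temperature of the calibration [deg C]
    """
    drift = coeff_tx * (t_tx - t_ref) + coeff_rx * (t_rx - t_ref)
    return d - drift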

Fig. 15 is a graph showing the relationship between temperature and distance output; the horizontal axis represents the temperature of the image sensor, and the vertical axis represents the distance output. As in the graph of Fig. 7, the distance without correction is plotted in blue, and the corrected distance is plotted in orange. Comparing the two, the corrected distance is stable and independent of temperature, unlike the distance without correction.

Fig. 15 Effectiveness of temperature correction

4.4 HDR processing to expand distance and reflectivity ranges

Because target objects at long and short distances, or with low and high reflectivity, cannot be measured at the same time, as discussed in Subsection 3.4, a distance image taken with a short light accumulation time (for targets at a short distance or with high reflectivity) and a distance image taken with a long light accumulation time (for targets at a long distance or with low reflectivity) are synthesized.

In high dynamic range (HDR) processing, the technique used to expand the measurable distance and reflectivity ranges, the two images are compared at each sensor element: when the element is saturated in one image, the unsaturated image is used, and when no saturation occurs in either image, the image with the higher quantity of received light is used.
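The selection rule described above can be sketched per element as follows; the saturation threshold of 255 is an assumption based on the Amp normalization in Subsection 3.4, and the array names are illustrative.

import numpy as np

SATURATION_LEVEL = 255  # Amp value at which the charge is treated as saturated

def hdr_merge(d_short, amp_short, d_long, amp_long):
    """Combine the short- and long-accumulation distance images element by element.

    If one of the two images is saturated at an element, the unsaturated one
    is used; otherwise the image with the larger received-light quantity
    (amplitude) is used.
    """
    sat_short = amp_short >= SATURATION_LEVEL
    sat_long = amp_long >= SATURATION_LEVEL

    use_long = amp_long > amp_short                   # default: larger amplitude wins
    use_long = np.where(sat_short, True, use_long)    # short image saturated -> use long
    use_long = np.where(sat_long, False, use_long)    # long image saturated  -> use short

    return np.where(use_long, d_long, d_short)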

Fig. 16 shows the effectiveness of HDR processing. Whereas the target objects at long and short distances or with low and high reflectivity could not all be measured at the same time in Fig. 9 of Subsection 3.4, here all the target objects, at long and short distances and with low and high reflectivity, can be measured, which means the dynamic range for distance and reflectivity has been expanded.

Fig. 16 Effectiveness of HDR processing

In an actual measurement environment, a number of target objects with various reflectivities are expected to be located at various positions, and the use of this HDR processing will reduce the restrictions on the measurement environment when using the TOF sensor.

4.5 3D conversion

As discussed in Subsection 3.5, when 1D distance information is converted into 3D coordinates, the effect of angular errors must be considered. The angles θ and Φ used to compute the 3D coordinates X, Y, and Z, as shown in Fig. 10, are obtained for each sensor element, summarized in a table, and incorporated into the sensor at the time of shipment. A single arithmetic expression common to all sensor elements is incorporated for computing X, Y, and Z. Because X, Y, and Z are obtained for each sensor element using the θ and Φ table unique to that unit, optimally corrected coordinates can be obtained from each sensor.
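A minimal sketch of this conversion is shown below, assuming the spherical convention described in Subsection 3.5 (θ measured from the optical axis, Φ the azimuth); the per-element θ and Φ arrays stand in for the table written at shipment.

import numpy as np

def to_xyz(r, theta, phi):
    """Convert the per-element distance r and the per-element angles theta, phi
    into 3D coordinates X, Y, Z (see Fig. 10)."""
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return np.stack([x, y, z], axis=-1)

# Example with a single element 1.2 m away, 10 degrees off the optical axis:
# r, theta, phi = 1.2, np.deg2rad(10.0), np.deg2rad(0.0)
# print(to_xyz(r, theta, phi))   # approximately [0.208, 0.0, 1.182]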

Fig. 17 compares the output coordinate Z converted using a θ and Φ table in which the shift of the optical axis of the lens is taken into account with the output coordinate Z of Fig. 11, which was converted using a table affected by the shift of the optical axis. As shown in the graph, the output coordinate Z obtained without the effect of the optical axis shift is constant and independent of the position of the sensor element, which is the ideal result.

Fig. 17 Output coordinate Z after 3D conversion

4.6 Summary of effectiveness of corrections

Table 1 shows comparisons of the distance errors before and after the corrections discussed in Subsections 4.1, 4.2, 4.3, and 4.5 are applied. Distance errors are expressed as a percentage of the true distance, and an accurate distance output within 2% distance error is obtained for every correction applied.

Table 1 Summary of distance error improvement after various corrections are applied

Type of Correction | Distance Error without Correction [%] | Distance Error with Correction [%]
Correction of Distance Linearity | 8.3 | 1.8
Correction of Dispersion between Sensor Elements | 2.4 (standard deviation: 0.98) | 0.6 (standard deviation: 0.33)
Temperature Correction (0 to 60°C) | 67.8 | 0.11
3D Conversion | 2.2 | 0.6

In addition, with respect to the HDR processing discussed in Subsection 4.4, the comparison of distance and reflectivity with and without HDR processing in Table 2 shows that distance measurement becomes possible over a wide range of distances from 1 m to 4 m and reflectivities from 5.5% to 56.3%.

Table 2 Summary of distance and reflectivity range improvement by HDR processing

Distance [m] | Reflectivity [%] | Without HDR, Accumulation Time 3200 μs | Without HDR, Accumulation Time 200 μs | With HDR Processing
1.0 | 5.5 | ○ | × | ○
1.0 | 56.3 | × | ○ | ○
2.0 | 5.5 | ○ | × | ○
2.0 | 56.3 | × | ○ | ○
4.0 | 5.5 | ○ | × | ○
4.0 | 56.3 | ○ | ○ | ○
○: Distance measurement possible, ×: Distance measurement impossible

5. Conclusion

The 3D TOF sensor module has variations and error factors peculiar to each individual unit, so in applications where high accuracy is needed, correction by the user has been required.

To solve these problems, we developed a function with which 3D information can be determined accurately with a single command, by incorporating a mechanism that writes the parameters and arithmetic expressions suited to each individual sensor unit.

The use of 3D sensors is expected to expand into complicated operating environments, such as public facilities, hospitals, railway stations, and commercial facilities, in addition to industrial applications for the transfer of goods in factories and warehouses, driven by the growing need to save labor because of the decreasing working population and to avoid the 3Cs during the COVID-19 pandemic1). The development described here provides a solution to these social needs.

We intend to continue the study to increase the detectable distance range, angle of view, and reflectivity range to respond to growing customer needs.

References

1) OMRON Corporation, "Omron launches B5L series embedded TOF sensor module with three-dimensional distance measurement capability of people and objects." https://www.omron.com/global/en/media/2020/09/c0915.html (accessed Sep. 1, 2020).
2) K. Yasutomi and S. Kawahito, "Time-of-Flight Camera," (in Japanese), Trans. Inst. Image Inf. Telev. Eng., vol. 70, pp. 880-882, 2016.
3) M. Hansard, S. Lee, O. Choi, and R. Horaud, Time of Flight Cameras: Principles, Methods, and Applications. Springer, 2012, pp. 8-10.

The names of products in the text may be trademarks of each company.