ANTI-SHAKE METHOD FOR PANORAMIC VIDEO, AND PORTABLE TERMINAL
FIELD OF THE INVENTION

The present invention relates generally to the field of video processing, and particularly to an anti-shake method for a panoramic video and a portable terminal.

BACKGROUND OF THE INVENTION

CMOS and CCD sensors are the two types of image sensors in common use today. Both use photodiodes for photoelectric conversion to turn light into digital data; the main difference lies in how that data is read out. In a CCD sensor, the charge of each pixel in a line is transferred to the next pixel in turn, read out at the bottom of the array, and then amplified by the amplifier at the edge of the sensor. In a CMOS sensor, each pixel has its own adjacent amplifier and A/D conversion circuit, and data is read out in a way similar to a memory circuit.

CMOS cameras generally use a rolling shutter and obtain fisheye images through progressive (line-by-line) exposure. Although a CMOS chip spreads the workload across many parallel A/D converters, the sensor array must still be converted one line at a time, which introduces a small delay between the readouts of successive lines. Each line can usually begin the exposure of the next frame as soon as its readout of the previous frame is completed. The readout is very fast, but the delay between line readouts becomes a delay between the exposure start times of the lines: every line in a frame is exposed for the same duration, yet each line starts its exposure at a different time. The exposures of two consecutive frames may overlap, and the final frame rate depends on how quickly the rolling readout can be completed. This exposure mode therefore creates a time difference between different lines of the same frame. When a handheld panoramic camera is moved quickly while shooting, the progressive exposure of the CMOS sensor produces a jelly (rolling-shutter) effect. This jelly effect has not been well solved, especially for VR panoramic video, and the resulting anti-shake effect of the video is poor.

SUMMARY OF THE INVENTION

The problem addressed by the present invention is to provide an anti-shake method for a panoramic video, a computer-readable storage medium and a portable terminal, which aims to solve the jelly effect caused by picture shake under the rolling shutter mode of a CMOS chip and the resulting poor anti-shake performance of the camera.

According to a first aspect, the present invention provides an anti-shake method for a panoramic video, comprising steps of: obtaining, in real time, an output video frame, a fisheye image corresponding to the output video frame, a timestamp of the pixel in the video frame, and a corresponding camera gyroscope timestamp; synchronizing the timestamp of the pixel in the video frame with the corresponding camera gyroscope timestamp, and calculating a rotation matrix of the camera movement at the camera gyroscope timestamp; smoothing the camera movement and establishing a coordinate system of a smooth trajectory; correcting the fisheye image distortion; and rendering the fisheye image by means of forward rendering to generate a stable video.
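As an illustration of the line-by-line timing described in the background above, the following minimal Python sketch computes per-row exposure start times for a rolling-shutter frame. The per-row model (start time plus row * dt / H) and the numeric values are assumptions for illustration only, not formulas taken from this disclosure.

```python
# Minimal sketch of rolling-shutter line timing (illustrative only; the
# per-row delay model t_row = t_frame + row * dt / H is an assumption,
# not a formula reproduced from this disclosure).

def line_exposure_starts(t_frame, dt, H):
    """Return the exposure start time of every image row.

    t_frame: timestamp of the first row of the frame
    dt:      total scan (readout) time of one frame
    H:       number of image rows
    """
    return [t_frame + row * dt / H for row in range(H)]

if __name__ == "__main__":
    starts = line_exposure_starts(t_frame=0.0, dt=0.010, H=1080)
    # Every row is exposed for the same duration but starts slightly later
    # than the row above it, which is what produces the jelly effect.
    print(starts[0], starts[539], starts[1079])
```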
According to a second aspect, the present invention provides an anti-shake method for a panoramic video, comprising steps of: S201, obtaining, in real time, an output video frame, a fisheye image corresponding to the output video frame, a timestamp of the pixel in the video frame, and a corresponding camera gyroscope timestamp; S202, synchronizing the timestamp of the pixel in the video frame to the camera gyroscope acquisition system, and calculating a rotation matrix of the camera movement under the timestamp of the pixel in the current video frame as the approximate rotation matrix of the camera when collecting the current pixel; S203, projecting the output video frame pixels to the corresponding grid points of the spherical model, and using the approximate rotation matrix to rotate the corresponding grid points of the spherical model to the grid points in the sensor coordinate system; using the fisheye image distortion correction method to establish the relationship between the grid points in the sensor coordinate system and the pixels in the fisheye image to obtain approximate reverse-mapping pixels; S204, calculating the camera gyroscope timestamp of the approximate reverse-mapping pixels in the fisheye image, obtaining the accurate rotation matrix of the camera when collecting the current pixel, and using the accurate rotation matrix to rotate the corresponding grid points of the spherical model again to obtain the second grid points in the sensor coordinate system; and S205, using the fisheye image distortion correction method to establish the relationship between the second grid points in the sensor coordinate system and the pixels in the fisheye image to obtain accurate pixels in the fisheye image, and, using the mapping relationship between the output video frame pixels and the accurate pixels in the fisheye image, rendering the fisheye image by means of reverse rendering to generate a stable video.

According to a third aspect, the present invention provides a computer-readable medium that stores a computer program or computer programs which, when executed by a processor or processors, cause the processor or processors to perform the steps of the above-mentioned anti-shake method for a panoramic video.

According to a fourth aspect, the present invention provides a portable terminal, comprising: one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory and are configured to be executed by the one or more processors, and when executed by the one or more processors, cause the one or more processors to perform the above-mentioned anti-shake method for a panoramic video.
In the present invention, based on the timestamp of the pixel in the video frame, the fisheye image is converted pixel by pixel into the coordinate system of the smooth trajectory in real time, using the rotation matrix of the camera motion under the timestamp of the current video frame pixel, and is then rendered and output; thereby the rolling-shutter distortion of the panoramic video sequence can be corrected. The invention can correct the image distortion caused by the CMOS rolling shutter, eliminate the jelly effect, and achieve a better video image anti-shake effect.

Further, in the present invention, according to the timestamp of the video frame pixel, the rotation matrix of the camera movement under the timestamp of the current video frame pixel is calculated as the approximate rotation matrix of the camera when collecting the current pixel; the output video frame pixels are projected to the corresponding grid points of the spherical model, and the approximate rotation matrix is used to rotate the corresponding grid points of the spherical model to the grid points in the sensor coordinate system; the fisheye image distortion correction method is used to establish the relationship between the grid points in the sensor coordinate system and the pixels in the fisheye image to obtain approximate reverse-mapping pixels; the camera gyroscope timestamp of the approximate reverse-mapping pixels in the fisheye image is calculated, the accurate rotation matrix of the camera when collecting the current pixel is obtained, and the accurate rotation matrix is used to rotate the corresponding grid points of the spherical model again to obtain the second grid points in the sensor coordinate system; the fisheye image distortion correction method is used to establish the relationship between the second grid points in the sensor coordinate system and the pixels in the fisheye image to obtain accurate pixels in the fisheye image; and, using the mapping relationship between the output video frame pixels and the accurate pixels in the fisheye image, the fisheye image is rendered by means of reverse rendering to generate a stable video. Thereby, the rolling-shutter distortion of the panoramic video sequence can be corrected. The invention can correct the image distortion caused by the CMOS rolling shutter and eliminate the jelly effect, thereby achieving a better video image anti-shake effect.

DETAILED DESCRIPTION OF THE INVENTION

The foregoing objects, technical solutions and advantages of the invention will become clearer from the following detailed description taken together with the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present invention and are not intended to limit it. In order to explain the technical solutions of the present invention, specific embodiments are described below.

First Embodiment
In the first embodiment, the method comprises steps S101 to S105.

S101, obtaining, in real time, an output video frame, a fisheye image corresponding to the output video frame, a timestamp of the pixel in the video frame, and a corresponding camera gyroscope timestamp.

In the first embodiment of the present invention, S101 may specifically comprise the following steps: S1011, obtaining the output video frame and the fisheye image corresponding to the output video frame; S1012, obtaining the timestamp t_frame^k of the pixel at the beginning of the k-th video frame; and S1013, obtaining the camera gyroscope timestamp t_gyro^k at the beginning of the k-th video frame.

S102, synchronizing the timestamp of the pixel in the video frame with the corresponding camera gyroscope timestamp, and calculating a rotation matrix of the camera movement at the camera gyroscope timestamp.

In the first embodiment of the present invention, S102 may specifically comprise the following steps: S1021, calculating the timestamp of the pixel in the k-th video frame by formula (1), where t_frame^k is the timestamp of the pixel at the beginning of the k-th video frame, Δt is a sampling time interval of the progressive scan of the video frame, and H is the number of lines of the image; S1022, synchronizing the timestamp of the pixel in the k-th video frame with the camera gyroscope timestamp, the conversion relationship between the two being given by formula (2), where ΔT0 = t_gyro^k − t_frame^k and t_gyro^k is the camera gyroscope timestamp at the beginning of the k-th video frame; and S1023, calculating, by formula (3), the rotation matrix of the camera movement at the camera gyroscope timestamp t_gyro^k, where R_w2g(t_gyro^k) is the 3×3 rotation matrix measured by the gyroscope sensor at time t_gyro^k and R_g2s is the calibrated rotation matrix from the gyroscope coordinate system to the camera coordinate system.

S103, smoothing the camera movement and establishing a coordinate system of a smooth trajectory.

In the first embodiment of the present invention, S103 may specifically comprise the following steps: S1031, with P^s representing the 3D coordinates in the camera coordinate system, letting t = t_gyro^k in formula (4), where R_w2s(t) is the rotation matrix of the camera coordinate system relative to the world coordinate system at time t, and t0 is the translation amount of the camera between the two adjacent video frames; S1032, smoothing the camera movement by setting t0 = 0 and thereby obtaining the corresponding world coordinates P_t^w; S1033, establishing the coordinate system P_t' of the smooth trajectory, and performing 3D grid rendering on the coordinate system P_t' of the camera's smooth trajectory in OpenGL by the rendering formula P_t^gl = K · R_nvp · P_t', where K is the perspective matrix, R_nvp is the rotation matrix of the viewing direction controlled manually, and P_t^gl is the 3D coordinates rendered by OpenGL in the coordinate system of the smooth trajectory; and S1034, in the coordinate system of the smooth trajectory, minimizing the squared difference of any pixel of P_t' between the two adjacent frames by the formula min (P_{t+1}' − P_t')^2 + (P_t' − P_t^w)^2, where P_t' and P_{t+1}' are the coordinate systems of the smooth trajectory of the two adjacent frames, respectively; setting P_t' = P_{t+1}' = P_t^w, obtaining the rendering formula as formula (5), and calculating, by formula (5), the 3D coordinates of the camera rendered into the coordinate system of the smooth trajectory.

S104, correcting the fisheye image distortion.

In the first embodiment of the present invention, S104 may specifically comprise: obtaining, by formula (6), a panoramic expanded image of P^s in the camera coordinate system, where u and v are the abscissa and ordinate of the panoramic expanded image in the camera coordinate system, respectively; and, using the fisheye distortion model ρ = f(θ) = θ(1 + k0·θ^2 + k1·θ^4 + k2·θ^6 + k3·θ^8), where θ is the incident angle of the light and k0, k1, k2 and k3 are the camera calibration coefficients, obtaining, by formula (7), the corrected coordinates (x, y) of the fisheye image, where r = √((P_x^s)^2 + (P_y^s)^2), θ = atan(r, P_z^s), the point (x_c, y_c) is the projection center of the sensor, and x and y are the 2D coordinates of the projection position on the camera sensor, respectively.
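To make the distortion correction of S104 concrete, the following minimal Python sketch projects a 3D point P^s in the camera coordinate system onto fisheye pixel coordinates using the polynomial model ρ = θ(1 + k0·θ^2 + k1·θ^4 + k2·θ^6 + k3·θ^8) given above. The azimuthal form of the mapping, the focal scale f_px, the projection center (xc, yc) and the coefficient values are placeholder assumptions, since formula (7) is referenced but not reproduced in the text.

```python
import numpy as np

# Sketch of the fisheye projection in S104.  The radial model rho = f(theta)
# follows the disclosure; f_px, (xc, yc) and k0..k3 below are placeholder
# assumptions for illustration only.

def project_to_fisheye(P_s, k=(0.0, 0.0, 0.0, 0.0), f_px=700.0, xc=960.0, yc=960.0):
    """Project a 3D point P_s = (Px, Py, Pz) in the camera coordinate system
    onto 2D fisheye image coordinates (x, y)."""
    Px, Py, Pz = P_s
    r = np.sqrt(Px**2 + Py**2)          # distance from the optical axis
    theta = np.arctan2(r, Pz)           # incident angle of the light
    k0, k1, k2, k3 = k
    rho = theta * (1 + k0*theta**2 + k1*theta**4 + k2*theta**6 + k3*theta**8)
    if r < 1e-12:                       # point on the optical axis
        return xc, yc
    # Distribute the radial displacement along the azimuth direction
    # (assumed form; formula (7) itself is not reproduced in the text).
    x = xc + f_px * rho * (Px / r)
    y = yc + f_px * rho * (Py / r)
    return x, y

# Example: a grid point of the unit sphere slightly off the optical axis.
print(project_to_fisheye((0.1, 0.05, 0.99)))
```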
S105, rendering the fisheye image by means of forward rendering to generate a stable video.

In the first embodiment of the present invention, S105 may specifically comprise the following steps: establishing a correspondence between the fisheye image and the grid points of the spherical model in the camera coordinate system by means of the fisheye image distortion correction method; generating the grid points of the spherical model in the coordinate system of the smooth trajectory by rendering; and performing a panoramic expansion of the grid points of the spherical model in the coordinate system of the smooth trajectory to generate a stable video. Specifically, the color information can be filled in by interpolation, and the projection can generate perspective images or asteroid (little-planet) images.

Referring to the drawings, A1 corresponds to A2, where A1 is a coordinate in the fisheye image and A2, corresponding to A1, is the coordinate of a grid point of the spherical model in the camera coordinate system; A3, corresponding to A2, is the coordinate of the grid point of the spherical model in the coordinate system of the smooth trajectory generated by rendering; and A4 is the coordinate point generated by the panoramic expansion projection of A3. Similarly, B1 corresponds to B2, where B1 is a coordinate in the fisheye image and B2, corresponding to B1, is the coordinate of a grid point of the spherical model in the camera coordinate system; B3, corresponding to B2, is the coordinate of the grid point of the spherical model in the coordinate system of the smooth trajectory generated by rendering; and B4 is the coordinate point generated by the panoramic expansion projection of B3.

In the first embodiment of the present invention, based on the timestamp of the pixel in the video frame, the fisheye image is converted pixel by pixel into the coordinate system of the smooth trajectory in real time, using the rotation matrix of the camera motion under the timestamp of the current video frame pixel, and is then rendered and output; thereby the rolling-shutter distortion of the panoramic video sequence can be corrected. The invention can correct the image distortion caused by the CMOS rolling shutter, eliminate the jelly effect, and achieve a better video image anti-shake effect.
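The forward-rendering chain A1 → A2 → A3 → A4 described above can be sketched per grid point as follows. The helper fisheye_lookup, the rotation R_cam_to_smooth and the equirectangular expansion are assumed stand-ins for the operations described in the text, not the patented renderer itself.

```python
import numpy as np

# Illustrative sketch of the forward-rendering chain A1 -> A2 -> A3 -> A4.
# fisheye_lookup() and R_cam_to_smooth are hypothetical inputs; the
# equirectangular expansion is one common choice of panoramic expansion,
# assumed here for illustration.

def panoramic_expand(P, width, height):
    """Map a unit direction P (A3) to output pixel coordinates (A4)
    using an equirectangular (longitude/latitude) expansion."""
    lon = np.arctan2(P[1], P[0])                  # [-pi, pi]
    lat = np.arcsin(np.clip(P[2], -1.0, 1.0))     # [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / np.pi) * (height - 1)
    return u, v

def forward_render_point(A2, R_cam_to_smooth, fisheye_lookup, width, height):
    """A2: spherical-model grid point in the camera coordinate system."""
    color = fisheye_lookup(A2)               # A2 -> A1: sample the fisheye image
    A3 = R_cam_to_smooth @ A2                 # A2 -> A3: rotate into the smooth trajectory
    A4 = panoramic_expand(A3, width, height)  # A3 -> A4: panoramic expansion
    return A4, color

# Example with an identity rotation and a dummy fisheye sampler.
A4, c = forward_render_point(np.array([0.0, 0.0, 1.0]), np.eye(3),
                             lambda p: (128, 128, 128), 3840, 1920)
print(A4, c)
```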
Second Embodiment

In the second embodiment, the method comprises steps S201 to S205.

S201, obtaining, in real time, an output video frame, a fisheye image corresponding to the output video frame, a timestamp of the pixel in the video frame, and a corresponding camera gyroscope timestamp.

In the second embodiment of the present invention, S201 may specifically comprise the following steps: S2011, obtaining the output video frame and the fisheye image corresponding to the output video frame; S2012, obtaining the timestamp t_frame^k of the pixel at the beginning of the k-th video frame; and S2013, obtaining the camera gyroscope timestamp t_gyro^k at the beginning of the k-th video frame.

S202, synchronizing the timestamp of the pixel in the video frame to the camera gyroscope acquisition system, and calculating the rotation matrix of the camera movement under the timestamp of the pixel in the current video frame as the approximate rotation matrix of the camera when collecting the current pixel.

In the second embodiment of the present invention, S202 may specifically comprise the following steps: S2021, synchronizing the timestamp of the pixel in the video frame to the camera gyroscope acquisition system, the conversion relationship between the two being shown in formula (8), where ΔT0 = t_gyro^k − t_frame^k, t_gyro^k is the camera gyroscope timestamp at the beginning of the k-th video frame, and t_frame^k is the timestamp of the pixel at the beginning of the k-th video frame; and S2022, calculating, by formula (9), the rotation matrix of the camera movement under the timestamp of the pixel in the current video frame at time t_gyro^k, where R_w2g(t_gyro^k) is the 3×3 rotation matrix measured by the gyroscope sensor at time t_gyro^k, and R_g2s(t_gyro^k) is the rotation matrix from the calibrated gyroscope coordinate system to the camera coordinate system.

S203, projecting the output video frame pixels to the corresponding grid points of the spherical model, and using the approximate rotation matrix to rotate the corresponding grid points of the spherical model to the grid points in the sensor coordinate system; and using the fisheye image distortion correction method to establish the relationship between the grid points in the sensor coordinate system and the pixels in the fisheye image to obtain approximate reverse-mapping pixels.

In the second embodiment of the present invention, S203 may specifically be: obtaining a panoramic expanded image of P^s in the camera coordinate system, where u and v are the abscissa and ordinate of the panoramic expanded image in the camera coordinate system, respectively; and, using the fisheye distortion model ρ = f(θ) = θ(1 + k0·θ^2 + k1·θ^4 + k2·θ^6 + k3·θ^8), where θ is the incident angle of the light and k0, k1, k2 and k3 are the camera calibration coefficients, obtaining the corrected coordinates (x, y) of the fisheye image, where r = √((P_x^s)^2 + (P_y^s)^2), θ = atan(r, P_z^s), the point (x_c, y_c) is the projection center of the sensor, and x and y are the 2D coordinates of the projection position on the camera sensor, respectively.
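As an illustration of S202 (formulas (8) and (9)), the sketch below shifts a pixel's frame timestamp into the gyroscope time base via ΔT0 = t_gyro^k − t_frame^k and picks the gyroscope rotation nearest that time. The per-row timestamp model, the nearest-sample lookup and the composition order of R_g2s with R_w2g are assumptions; the disclosure only defines ΔT0, Δt and H and does not reproduce the formula bodies.

```python
import numpy as np

# Sketch of S202: synchronizing a pixel timestamp to the gyroscope time base
# and selecting an (approximate) camera rotation for that pixel.
# The per-row model t_frame^k + row*dt/H, the nearest-sample lookup and the
# matrix composition order are assumptions used for illustration.

def pixel_gyro_time(t_frame_k, delta_T0, row, dt, H):
    """delta_T0 = t_gyro^k - t_frame^k (formula (8));
    dt is the frame scan interval, H the number of image lines."""
    t_pixel_frame = t_frame_k + row * dt / H    # assumed per-row timestamp
    return t_pixel_frame + delta_T0             # shifted into the gyro time base

def approx_rotation(t_pixel_gyro, gyro_times, gyro_rotations, R_g2s):
    """gyro_times: (N,) timestamps; gyro_rotations: (N, 3, 3) R_w2g samples;
    R_g2s: calibrated gyroscope-to-camera rotation (formula (9))."""
    i = int(np.argmin(np.abs(gyro_times - t_pixel_gyro)))   # nearest gyro sample
    R_w2g = gyro_rotations[i]
    return R_g2s @ R_w2g                                     # assumed composition

# Example with dummy data: 100 identity rotations over 0.1 s.
times = np.linspace(0.0, 0.1, 100)
rots = np.repeat(np.eye(3)[None, :, :], 100, axis=0)
t = pixel_gyro_time(t_frame_k=0.0, delta_T0=0.002, row=540, dt=0.01, H=1080)
print(approx_rotation(t, times, rots, np.eye(3)))
```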
S204, calculating the camera gyroscope timestamp of the approximate reverse-mapping pixels in the fisheye image, obtaining the accurate rotation matrix of the camera when collecting the current pixel, and using the accurate rotation matrix to rotate the corresponding grid points of the spherical model again to obtain the second grid points in the sensor coordinate system.

In the second embodiment of the present invention, the step of calculating the camera gyroscope timestamp of the approximate reverse-mapping pixels in the fisheye image and obtaining the accurate rotation matrix of the camera when collecting the current pixel specifically is: using the approximate reverse-mapping pixels in the fisheye image to calculate the camera gyroscope timestamp of the pixel by formula (10), where Δt is a sampling time interval of the progressive scan of the video frame and H is the number of lines of the image; and further calculating the accurate rotation matrix R_w2s of the camera at that camera gyroscope timestamp.

S205, using the fisheye image distortion correction method to establish the relationship between the second grid points in the sensor coordinate system and the pixels in the fisheye image to obtain accurate pixels in the fisheye image, and, using the mapping relationship between the output video frame pixels and the accurate pixels in the fisheye image, rendering the fisheye image by means of reverse rendering to generate a stable video.

Referring to the drawings, the output video frame pixel A4 is projected to the corresponding grid point A3 of the spherical model; the approximate rotation matrix is used to rotate A3 to obtain A2 in the sensor coordinate system; the fisheye image distortion correction method is used to project A2 onto the fisheye image to obtain the pixel A1; the accurate rotation matrix is used to rotate A3 again to obtain A′2 in the sensor coordinate system; the fisheye image distortion correction method is then used to project A′2 onto the fisheye image to obtain the accurate pixel A′1; and the mapping relationship between the output video frame pixel A4 and the accurate pixel A′1 in the fisheye image is used for reverse mapping and rendering. Similarly, the output video frame pixel B4 is projected to the corresponding grid point B3 of the spherical model; the approximate rotation matrix is used to rotate B3 to obtain B2 in the sensor coordinate system; the fisheye image distortion correction method is used to project B2 onto the fisheye image to obtain the pixel B1; the accurate rotation matrix is used to rotate B3 again to obtain B′2 in the sensor coordinate system; the fisheye image distortion correction method is then used to project B′2 onto the fisheye image to obtain the accurate pixel B′1; and the mapping relationship between the output video frame pixel B4 and the accurate pixel B′1 in the fisheye image is used for reverse mapping and rendering. It should also be noted that the color information of the perspective view or the asteroid image generated by the projection is filled in by interpolation, and finally a stable video can be generated.
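The two-pass reverse mapping of S203–S205 can be sketched for a single output pixel as follows. The helpers output_to_sphere, rotation_at_time and project_to_fisheye are hypothetical stand-ins for the operations described above, and the per-row time t_gyro^k + Δt·row/H is an assumed reading of formula (10), consistent with the definitions of Δt and H but not reproduced from the disclosure.

```python
import numpy as np

# Sketch of the two-pass reverse mapping in S203-S205 for one output pixel.
# output_to_sphere(), rotation_at_time() and project_to_fisheye() are
# hypothetical helpers; the per-row time t_gyro^k + dt*row/H is an assumed
# form of formula (10).

def reverse_map_pixel(out_px, t_gyro_k, dt, H,
                      output_to_sphere, rotation_at_time, project_to_fisheye):
    # S203: output pixel -> spherical grid point, rotated with the
    # approximate rotation taken at the start-of-frame time.
    A3 = output_to_sphere(out_px)              # grid point (smooth trajectory)
    R_approx = rotation_at_time(t_gyro_k)
    A2 = R_approx.T @ A3                       # into the sensor coordinate system
    x, y = project_to_fisheye(A2)              # approximate reverse-mapping pixel

    # S204: recompute the gyro timestamp from the row of that pixel and
    # rotate the grid point again with the accurate rotation.
    t_accurate = t_gyro_k + dt * (y / H)       # assumed formula (10)
    R_accurate = rotation_at_time(t_accurate)
    A2_second = R_accurate.T @ A3              # second grid point

    # S205: accurate fisheye pixel used for reverse rendering.
    return project_to_fisheye(A2_second)

# Example with dummy stand-ins (identity rotations, toy projections).
if __name__ == "__main__":
    px = reverse_map_pixel(
        out_px=(100, 200), t_gyro_k=0.0, dt=0.01, H=1080,
        output_to_sphere=lambda p: np.array([0.0, 0.0, 1.0]),
        rotation_at_time=lambda t: np.eye(3),
        project_to_fisheye=lambda P: (960.0 + 500 * P[0], 960.0 + 500 * P[1]),
    )
    print(px)
```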
In the second embodiment of the present invention, according to the timestamp of the video frame pixel, the rotation matrix of the camera movement under the timestamp of the current video frame pixel is calculated as the approximate rotation matrix of the camera when collecting the current pixel; the output video frame pixels are projected to the corresponding grid points of the spherical model, and the approximate rotation matrix is used to rotate the corresponding grid points of the spherical model to the grid points in the sensor coordinate system; the fisheye image distortion correction method is used to establish the relationship between the grid points in the sensor coordinate system and the pixels in the fisheye image to obtain approximate reverse-mapping pixels; the camera gyroscope timestamp of the approximate reverse-mapping pixels in the fisheye image is calculated, the accurate rotation matrix of the camera when collecting the current pixel is obtained, and the accurate rotation matrix is used to rotate the corresponding grid points of the spherical model again to obtain the second grid points in the sensor coordinate system; the fisheye image distortion correction method is used to establish the relationship between the second grid points in the sensor coordinate system and the pixels in the fisheye image to obtain accurate pixels in the fisheye image; and, using the mapping relationship between the output video frame pixels and the accurate pixels in the fisheye image, the fisheye image is rendered by means of reverse rendering to generate a stable video. Thereby, the rolling-shutter distortion of the panoramic video sequence can be corrected. The invention can correct the image distortion caused by the CMOS rolling shutter and eliminate the jelly effect, thereby achieving a better video image anti-shake effect.

Third Embodiment

The third embodiment of the present invention provides a computer-readable storage medium that stores a computer program or computer programs which, when executed by a processor or processors, cause the processor or processors to perform the steps of the anti-shake method for a panoramic video provided in the first or second embodiment of the present invention. A person of ordinary skill in the art may understand that all or part of the steps in the methods of the above-mentioned embodiments can be implemented by a program or programs instructing the relevant hardware. The program or programs can be stored in a computer-readable storage medium, and the storage medium can be, for example, a ROM/RAM, a magnetic disk, an optical disk, etc.

The above descriptions are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement and improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
The present invention provides an anti-shake method for a panoramic video, which comprises: obtaining, in real time, an output video frame, a fisheye image corresponding thereto, a pixel timestamp in the video frame, and a corresponding camera gyroscope timestamp; synchronizing the pixel timestamp in the video frame with the corresponding camera gyroscope timestamp, and calculating a rotation matrix of the camera movement at the camera gyroscope timestamp; smoothing the camera movement and establishing a coordinate system of a smooth trajectory; correcting the fisheye image distortion; and rendering the fisheye image by means of forward rendering to generate a stable video. According to the present invention, the rolling-shutter distortion of a panoramic video sequence can be corrected, the image distortion caused by the rolling shutter of a CMOS sensor can be corrected, and the jelly effect can be eliminated, thereby achieving a better anti-shake effect of the video image.

1. An anti-shake method for a panoramic video, comprising steps of:
obtaining, in real time, an output video frame, a fisheye image corresponding to the output video frame, a timestamp of the pixel in the video frame, and a corresponding camera gyroscope timestamp; synchronizing the timestamp of the pixel in the video frame with the corresponding camera gyroscope timestamp, and calculating a rotation matrix of the camera movement at the camera gyroscope timestamp; smoothing the camera movement and establishing a coordinate system of a smooth trajectory; correcting the fisheye image distortion; and rendering the fisheye image by means of forward rendering to generate a stable video.

2. The method of claim 1, wherein the step of obtaining, in real time, the output video frame, the fisheye image, the timestamp of the pixel in the video frame and the corresponding camera gyroscope timestamp comprises: obtaining the output video frame and the fisheye image corresponding to the output video frame; obtaining the timestamp t_frame^k of the pixel at the beginning of the k-th video frame; and obtaining the camera gyroscope timestamp t_gyro^k at the beginning of the k-th video frame.

3. The method of claim 2, wherein the step of synchronizing the timestamp of the pixel in the video frame with the corresponding camera gyroscope timestamp, and calculating the rotation matrix of the camera movement at the camera gyroscope timestamp comprises: calculating the timestamp of the pixel in the k-th video frame by formula (1), where t_frame^k is the timestamp of the pixel at the beginning of the k-th video frame, Δt is a sampling time interval of the progressive scan of the video frame, and H is the number of lines of the image; and synchronizing the timestamp of the pixel in the k-th video frame with the camera gyroscope timestamp, a conversion relationship between the two being given by formula (2):
where, in formula (2), ΔT0 = t_gyro^k − t_frame^k and t_gyro^k is the camera gyroscope timestamp at the beginning of the k-th video frame; and calculating, by formula (3), the rotation matrix of the camera movement at the camera gyroscope timestamp t_gyro^k, where R_w2g(t_gyro^k) is the rotation matrix measured by the gyroscope sensor at time t_gyro^k.

4. The method of claim 3, wherein the step of smoothing the camera movement and establishing the coordinate system of the smooth trajectory comprises: with P^s representing the 3D coordinates in the camera coordinate system, letting t = t_gyro^k in formula (4), where R_w2s(t) is a matrix of the camera coordinate system relative to the world coordinate system at time t, and t0 is a translation amount of the camera between the two adjacent video frames; smoothing the camera movement by setting t0 = 0 and thereby obtaining the corresponding world coordinates P_t^w; establishing a coordinate system P_t' of a smooth trajectory, and performing 3D grid rendering on the coordinate system P_t' of the camera's smooth trajectory by a rendering formula P_t^gl = K · R_nvp · P_t', where K is a perspective matrix, R_nvp is a rotation matrix of the view direction controlled manually, and P_t^gl is the 3D coordinates rendered in the coordinate system of the smooth trajectory; and, in the coordinate system of the smooth trajectory, minimizing the squared difference of any pixel of P_t' between the two adjacent frames by the formula min (P_{t+1}' − P_t')^2 + (P_t' − P_t^w)^2, where P_t' and P_{t+1}' are the coordinate systems of the smooth trajectory of the two adjacent frames, respectively; setting P_t' = P_{t+1}' = P_t^w, and obtaining the rendering formula as formula (5):
calculating, by formula (5), the 3D coordinates of the camera rendered into the coordinate system of the smooth trajectory.

5. The method of claim 4, wherein the step of correcting the fisheye image distortion comprises: obtaining, by formula (6), a panoramic expanded image of P^s in the camera coordinate system, where u and v are the abscissa and ordinate of the panoramic expanded image in the camera coordinate system, respectively; and, using a fisheye distortion model ρ = f(θ) = θ(1 + k0·θ^2 + k1·θ^4 + k2·θ^6 + k3·θ^8), where θ is an incident angle of the light and k0, k1, k2 and k3 are the camera calibration coefficients, obtaining, by formula (7), the corrected coordinates (x, y) of the fisheye image, where r = √((P_x^s)^2 + (P_y^s)^2), θ = atan(r, P_z^s), the point (x_c, y_c) is a projection center of the sensor, and x and y are the 2D coordinates of a projection position on the camera sensor, respectively.

6. The method of claim 1, wherein the step of rendering the fisheye image by means of forward rendering to generate the stable video comprises: establishing a correspondence between the fisheye image and grid points of the spherical model in the camera coordinate system by means of a fisheye image distortion correction method; generating grid points of the spherical model in the coordinate system of the smooth trajectory by rendering; and performing panoramic expansion for the grid points of the spherical model in the coordinate system of the smooth trajectory to generate the stable video.

7. A non-transitory computer-readable medium that stores one or more computer programs including a set of computer-executable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the anti-shake method for a panoramic video of claim 1.

8. A portable terminal, comprising:
one or more processors; a memory; and one or more computer programs including a set of computer-executable instructions, wherein the set of computer-executable instructions are stored in the memory and are configured to be executed by the one or more processors, and, when executed by the one or more processors, cause the one or more processors to perform the steps of an anti-shake method for a panoramic video, the method comprising: obtaining, in real time, an output video frame, a fisheye image corresponding to the output video frame, a timestamp of the pixel in the video frame, and a corresponding camera gyroscope timestamp; synchronizing the timestamp of the pixel in the video frame with the corresponding camera gyroscope timestamp, and calculating a rotation matrix of the camera movement at the camera gyroscope timestamp; smoothing the camera movement and establishing a coordinate system of a smooth trajectory; correcting the fisheye image distortion; and rendering the fisheye image by means of forward rendering to generate a stable video.

9. An anti-shake method for a panoramic video, comprising steps of:
S201, obtaining, in real time, an output video frame, a fisheye image corresponding to the output video frame, a timestamp of the pixel in the video frame, and a corresponding camera gyroscope timestamp; S202, synchronizing the timestamp of the pixel in the video frame to the corresponding camera gyroscope acquisition system, and calculating a rotation matrix of the camera movement under the timestamp of the pixel in the current video frame as an approximate rotation matrix of the camera when collecting the current pixel; S203, projecting the output video frame pixels to the corresponding grid points of the spherical model, and using the approximate rotation matrix to rotate the corresponding grid points of the spherical model to the grid points in the sensor coordinate system; using a fisheye image distortion correction method to establish a relationship between the grid points in the sensor coordinate system and the pixels in the fisheye image to obtain approximate reverse-mapping pixels; S204, calculating the camera gyroscope timestamp of the approximate reverse-mapping pixels in the fisheye image, obtaining the accurate rotation matrix of the camera when collecting the current pixel, and using the accurate rotation matrix to rotate the corresponding grid points of the spherical model again to obtain the second grid points in the sensor coordinate system; and S205, using the fisheye image distortion correction method to establish the relationship between the second grid points in the sensor coordinate system and the pixels in the fisheye image to obtain accurate pixels in the fisheye image, and, using a mapping relationship between the output video frame pixels and the accurate pixels in the fisheye image, rendering the fisheye image by means of reverse rendering to generate a stable video.

10. The method of claim 9, wherein the step of obtaining, in real time, the output video frame, the fisheye image, the timestamp of the pixel in the video frame and the corresponding camera gyroscope timestamp comprises: obtaining the output video frame and the fisheye image corresponding to the output video frame; obtaining the timestamp t_frame^k of the pixel at the beginning of the k-th video frame; and obtaining the camera gyroscope timestamp t_gyro^k at the beginning of the k-th video frame.

11. The method of claim 10, wherein the step of synchronizing the timestamp of the pixel in the video frame to the corresponding camera gyroscope acquisition system, and calculating the rotation matrix of the camera movement under the timestamp of the pixel in the current video frame comprises: synchronizing the timestamp of the pixel in the video frame to the corresponding camera gyroscope acquisition system, a conversion relationship between the two being shown in formula (8):
where, in formula (8), ΔT0 = t_gyro^k − t_frame^k, t_gyro^k is the camera gyroscope timestamp at the beginning of the k-th video frame, and t_frame^k is the timestamp of the pixel at the beginning of the k-th video frame; and calculating, by formula (9), a rotation matrix of the camera movement under the timestamp of the pixel in the current video frame at time t_gyro^k:
where, in formula (9), R_w2g(t_gyro^k) is a rotation matrix measured by the gyroscope sensor at time t_gyro^k, and R_g2s(t_gyro^k) is a rotation matrix from the calibrated gyroscope coordinate system to the camera coordinate system.

12. The method of claim 11, wherein the step of using the fisheye image distortion correction method to establish the relationship between the grid points in the sensor coordinate system and the pixels in the fisheye image comprises: obtaining a panoramic expanded image of P^s in the camera coordinate system, where u and v are the abscissa and ordinate of the panoramic expanded image in the camera coordinate system, respectively; and, using a fisheye distortion model ρ = f(θ) = θ(1 + k0·θ^2 + k1·θ^4 + k2·θ^6 + k3·θ^8), where θ is an incident angle of the light and k0, k1, k2 and k3 are the camera calibration coefficients, obtaining the corrected coordinates (x, y) of the fisheye image, where r = √((P_x^s)^2 + (P_y^s)^2), θ = atan(r, P_z^s), the point (x_c, y_c) is a projection center of the sensor, and x and y are the 2D coordinates of a projection position on the camera sensor, respectively.

13. The method of claim 12, wherein the step of calculating the camera gyroscope timestamp of the approximate reverse-mapping pixels in the fisheye image and obtaining the accurate rotation matrix of the camera when collecting the current pixel comprises: using the approximate reverse-mapping pixels in the fisheye image to calculate the camera gyroscope timestamp of the pixel by formula (10), where Δt is a sampling time interval of the progressive scan of the video frame and H is the number of lines of the image; and then calculating the accurate rotation matrix R_w2s of the camera when collecting the current pixel.

14. The method of claim 13, wherein the step of rendering the fisheye image by means of reverse rendering to generate the stable video comprises: projecting the output video frame pixel A4 to the corresponding grid point A3 of the spherical model, using the approximate rotation matrix to rotate A3 to obtain A2 in the sensor coordinate system, using the fisheye image distortion correction method to project A2 onto the fisheye image to obtain the pixel A1, using the accurate rotation matrix to rotate A3 again to obtain A′2 in the sensor coordinate system, and then using the fisheye image distortion correction method to project A′2 onto the fisheye image to obtain the accurate pixel A′1; using the mapping relationship between the output video frame pixel A4 and the accurate pixel A′1 in the fisheye image to render the fisheye image by means of reverse rendering to generate the stable video; and, similarly, projecting the output video frame pixel B4 to the corresponding grid point B3 of the spherical model, using the approximate rotation matrix to rotate B3 to obtain B2 in the sensor coordinate system, using the fisheye image distortion correction method to project B2 onto the fisheye image to obtain the pixel B1, using the accurate rotation matrix to rotate B3 again to obtain B′2 in the sensor coordinate system, and then using the fisheye image distortion correction method to project B′2 onto the fisheye image to obtain the accurate pixel B′1; and using the mapping relationship between the output video frame pixel B4 and the accurate pixel B′1 in the fisheye image to render the fisheye image by means of reverse rendering to generate the stable video.

15. A non-transitory computer-readable medium that stores one or more computer programs including a set of computer-executable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the anti-shake method for a panoramic video of claim 9.

16. A portable terminal, comprising:
one or more processors; a memory; and one or more computer programs including a set of computer-executable instructions, wherein the set of computer-executable instructions are stored in the memory and are configured to be executed by the one or more processors, and, when executed by the one or more processors, cause the one or more processors to perform the steps of the anti-shake method for a panoramic video of claim 9.


