
Perspective Camera Model: Intrinsic and Extrinsic Parameters, Slides of Computer Vision

These slides cover the perspective camera model, which can be represented by a 3x3 matrix of intrinsic parameters and a 3x4 matrix of extrinsic parameters. The intrinsic parameters include the focal length and principal point offset, while the extrinsic parameters represent the camera's position and orientation in the world. The slides also include examples of translation and rotation transformations.

What you will learn

  • How does the camera center and rotation affect the perspective camera model?
  • How are the internal and external camera parameters derived?
  • What are the internal and external camera parameters in the perspective camera model?
  • What is the role of intrinsic and extrinsic parameters in the perspective camera model?
  • How is the perspective camera model applied in real-world scenarios like aircraft and surveillance cameras?

Typology: Slides

2017/2018

Uploaded on 02/14/2018

hafiz_arslan


CCD Camera

  • We have assumed same units for world & image coordinates
  • In a CCD camera, image coordinates are measured in pixels
  • Some CCD cameras also have non-square pixels
  • We can convert to pixel units as

where m_x and m_y are scale factors (pixels per unit length) needed to convert to pixel dimensions:

  • m_x = (# of pixels in x direction) / (size of CCD array in x direction)
  • m_y = (# of pixels in y direction) / (size of CCD array in y direction)
  • (x_0, y_0) is the principal point offset in pixel dimensions

K = [ m_x·f    0      x_0 ]
    [   0    m_y·f    y_0 ]
    [   0      0       1  ]

where x_0 = m_x·p_x and y_0 = m_y·p_y.
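As a small sketch of the conversion above (the sensor dimensions and focal length here are made-up example values, not from the slides):

```python
import numpy as np

# Hypothetical sensor: 640x480 pixels on a 6.4mm x 4.8mm CCD, focal length 8mm
f = 8.0                     # focal length in mm
mx = 640 / 6.4              # pixels per mm in x
my = 480 / 4.8              # pixels per mm in y
px, py = 3.2, 2.4           # principal point in mm (center of the sensor)
x0, y0 = mx * px, my * py   # principal point offset in pixels

# Intrinsic matrix K in pixel units
K = np.array([[mx * f, 0.0,    x0],
              [0.0,    my * f, y0],
              [0.0,    0.0,    1.0]])
print(K)   # focal length of 800 pixels, principal point (320, 240)
```

With square 100 px/mm pixels the two focal terms coincide; non-square pixels would make m_x·f and m_y·f differ.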

Pinhole camera in general view

  • This is for the case when the camera’s optical axis is aligned with the world z-axis
  • What if that is not the case?

Example

  • Translation by 10 units to the right: camera center C = [10, 0, 10]^T

Pinhole camera in general view

  • In general, the camera center is at a rotation of R^T, followed by a translation of C from the world origin (world axes → camera axes)
  • Then

[h_x]   [m_x·f   0    x_0  0] [r_11 r_12 r_13 0] [1 0 0 −C_x] [X]
[h_y] = [  0   m_y·f  y_0  0] [r_21 r_22 r_23 0] [0 1 0 −C_y] [Y]
[h  ]   [  0     0     1   0] [r_31 r_32 r_33 0] [0 0 1 −C_z] [Z]
                              [  0    0    0  1] [0 0 0   1 ] [1]
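A minimal numerical sketch of the general pinhole projection x = K [R | −RC̃] X (the K entries, camera center, and world point are invented example values):

```python
import numpy as np

def projection_matrix(K, R, C):
    """P = K [R | -R C]: maps world points to homogeneous image points
    for a camera centered at C with orientation R."""
    return K @ np.hstack([R, -R @ C.reshape(3, 1)])

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                       # camera still looking down the world z-axis
C = np.array([10.0, 0.0, 0.0])      # camera center translated 10 units along x

P = projection_matrix(K, R, C)
X = np.array([10.0, 0.0, 5.0, 1.0])  # world point straight ahead of the camera
h = P @ X                            # homogeneous image point [h_x, h_y, h]
x = h[:2] / h[2]                     # divide out the scale
print(x)   # lands on the principal point (320, 240)
```

A point on the (shifted) optical axis projects to the principal point, which is a quick sanity check that the translation was applied on the correct side of R.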

Camera Model Example

  • Think that the camera was originally at the origin, looking down the Z axis
  • Then it was translated by (r_1, r_2, r_3)^T, rotated by φ about X and θ about Z, then translated by (X_0, Y_0, Z_0)^T
  • This is the scenario in the figure on the right

Figure Reference: Gonzales and Woods, “Digital Image Processing”

Camera Model Example

In matrix form, the world-to-image mapping inverts each motion of the camera, in reverse order:

[h_x]   [m_x·f   0    x_0  0] [1 0 0 −r_1] [1    0      0    0] [ cos θ  sin θ  0 0] [1 0 0 −X_0] [X]
[h_y] = [  0   m_y·f  y_0  0] [0 1 0 −r_2] [0  cos φ  sin φ  0] [−sin θ  cos θ  0 0] [0 1 0 −Y_0] [Y]
[h  ]   [  0     0     1   0] [0 0 1 −r_3] [0 −sin φ  cos φ  0] [  0      0     1 0] [0 0 1 −Z_0] [Z]
                              [0 0 0   1 ] [0    0      0    1] [  0      0     0 1] [0 0 0   1 ] [1]
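The "invert each motion, in reverse order" rule can be checked numerically. A sketch with made-up angles and translations (not values from the slides):

```python
import numpy as np

def Rx(a):
    """4x4 homogeneous rotation about the x-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def Rz(a):
    """4x4 homogeneous rotation about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def T(t):
    """4x4 homogeneous translation by vector t."""
    M = np.eye(4)
    M[:3, 3] = t
    return M

phi, theta = 0.3, 0.7
r = np.array([1.0, 2.0, 3.0])        # first translation
w = np.array([10.0, 20.0, 30.0])     # final translation (X_0, Y_0, Z_0)

# Camera motion: translate by r, rotate phi about X, theta about Z, translate by w
motion = T(w) @ Rz(theta) @ Rx(phi) @ T(r)

# World-to-camera transform: each inverse, applied in reverse order
world_to_cam = T(-r) @ Rx(-phi) @ Rz(-theta) @ T(-w)

print(np.allclose(world_to_cam @ motion, np.eye(4)))   # True: they cancel
```

Because each factor meets its own inverse in the middle of the product, the composition collapses to the identity, which is exactly why the projection matrix above stacks the negated translations and transposed rotations in the opposite order from the camera's motion.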

Aircraft Example

P = K_3×4 · R_y(ω) · R_z(τ) · R_y(ϕ) · R_x(β) · R_z(α) · T(−ΔT), where (rows separated by semicolons):

K_3×4  = [m_x·f 0 x_0 0;  0 m_y·f y_0 0;  0 0 1 0]
R_y(ω) = [cos ω 0 −sin ω 0;  0 1 0 0;  sin ω 0 cos ω 0;  0 0 0 1]
R_z(τ) = [cos τ sin τ 0 0;  −sin τ cos τ 0 0;  0 0 1 0;  0 0 0 1]
R_y(ϕ) = [cos ϕ 0 −sin ϕ 0;  0 1 0 0;  sin ϕ 0 cos ϕ 0;  0 0 0 1]
R_x(β) = [1 0 0 0;  0 cos β sin β 0;  0 −sin β cos β 0;  0 0 0 1]
R_z(α) = [cos α sin α 0 0;  −sin α cos α 0 0;  0 0 1 0;  0 0 0 1]
T(−ΔT) = [1 0 0 −ΔT_x;  0 1 0 −ΔT_y;  0 0 1 −ΔT_z;  0 0 0 1]

cameraMat = perspective_transform * gimbal_rotation_y * gimbal_rotation_z * gimbal_translation * vehicle_rotation_x * vehicle_rotation_y * vehicle_rotation_z * vehicle_translation ;
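The chain above can be sketched as a single matrix product. The angles and offsets below are invented placeholders, and the rotation-sign convention follows the matrices as written on the slide:

```python
import numpy as np

def rot(axis, a):
    """4x4 homogeneous rotation about a principal axis ('x', 'y', or 'z'),
    matching the sign convention of the slide's matrices."""
    c, s = np.cos(a), np.sin(a)
    R = np.eye(4)
    i, j = {'x': (1, 2), 'y': (0, 2), 'z': (0, 1)}[axis]
    R[i, i] = c; R[j, j] = c; R[i, j] = -s; R[j, i] = s
    return R

def trans(t):
    """4x4 homogeneous translation by vector t."""
    M = np.eye(4)
    M[:3, 3] = t
    return M

# Hypothetical parameters: gimbal angles (omega, tau), vehicle attitude
# (phi, beta, alpha), vehicle position offset dT, and example intrinsics
K = np.array([[800.0, 0.0, 320.0, 0.0],
              [0.0, 800.0, 240.0, 0.0],
              [0.0,   0.0,   1.0, 0.0]])
omega, tau = 0.10, 0.20
phi, beta, alpha = 0.05, 0.10, 1.50
dT = np.array([100.0, 200.0, 1000.0])

camera_mat = (K @ rot('y', omega) @ rot('z', tau)
                @ rot('y', phi) @ rot('x', beta) @ rot('z', alpha)
                @ trans(-dT))
print(camera_mat.shape)   # (3, 4): a full world-to-image projection matrix
```

The product of a 3×4 intrinsic matrix with 4×4 rotation and translation factors stays 3×4, so one matrix carries the whole sensor-plus-platform geometry.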

OTTER system_id
TV sensor_type
0001 serial_number
9.400008152666640300e+08 image_time
3.813193746469612200e+01 vehicle_latitude
-7.734523185193877700e+01 vehicle_longitude
9.949658409987658800e+02 vehicle_height
9.995171174441039900e-01 vehicle_pitch
1.701626418113209000e+00 vehicle_roll
1.207010551753029400e+02 vehicle_heading
1.658968732990974800e-02 camera_focal_length
-5.361314389557259100e+01 camera_elevation
-7.232969433546705000e+00 camera_scan_angle
480 number_image_lines
640 number_image_samples

% Entries of the 4x4 world-to-image transform c, with the '*' operators
% (dropped in extraction) restored. The fourth-column entries and the
% fourth row are written in terms of the other entries; this is
% algebraically identical to the fully expanded original.
c(1,1) = (cos(c_scn)*cos(v_rll) - sin(c_scn)*sin(v_pch)*sin(v_rll))*cos(v_hdg) - sin(c_scn)*cos(v_pch)*sin(v_hdg);
c(1,2) = -(cos(c_scn)*cos(v_rll) - sin(c_scn)*sin(v_pch)*sin(v_rll))*sin(v_hdg) - sin(c_scn)*cos(v_pch)*cos(v_hdg);
c(1,3) = -cos(c_scn)*sin(v_rll) - sin(c_scn)*sin(v_pch)*cos(v_rll);
c(1,4) = -c(1,1)*vx - c(1,2)*vy - c(1,3)*vz;
c(2,1) = (-sin(c_elv)*sin(c_scn)*cos(v_rll) + (-sin(c_elv)*cos(c_scn)*sin(v_pch) + cos(c_elv)*cos(v_pch))*sin(v_rll))*cos(v_hdg) + (-sin(c_elv)*cos(c_scn)*cos(v_pch) - cos(c_elv)*sin(v_pch))*sin(v_hdg);
c(2,2) = -(-sin(c_elv)*sin(c_scn)*cos(v_rll) + (-sin(c_elv)*cos(c_scn)*sin(v_pch) + cos(c_elv)*cos(v_pch))*sin(v_rll))*sin(v_hdg) + (-sin(c_elv)*cos(c_scn)*cos(v_pch) - cos(c_elv)*sin(v_pch))*cos(v_hdg);
c(2,3) = sin(c_elv)*sin(c_scn)*sin(v_rll) + (-sin(c_elv)*cos(c_scn)*sin(v_pch) + cos(c_elv)*cos(v_pch))*cos(v_rll);
c(2,4) = -c(2,1)*vx - c(2,2)*vy - c(2,3)*vz;
c(3,1) = (cos(c_elv)*sin(c_scn)*cos(v_rll) + (cos(c_elv)*cos(c_scn)*sin(v_pch) + sin(c_elv)*cos(v_pch))*sin(v_rll))*cos(v_hdg) + (cos(c_elv)*cos(c_scn)*cos(v_pch) - sin(c_elv)*sin(v_pch))*sin(v_hdg);
c(3,2) = -(cos(c_elv)*sin(c_scn)*cos(v_rll) + (cos(c_elv)*cos(c_scn)*sin(v_pch) + sin(c_elv)*cos(v_pch))*sin(v_rll))*sin(v_hdg) + (cos(c_elv)*cos(c_scn)*cos(v_pch) - sin(c_elv)*sin(v_pch))*cos(v_hdg);
c(3,3) = -cos(c_elv)*sin(c_scn)*sin(v_rll) + (cos(c_elv)*cos(c_scn)*sin(v_pch) + sin(c_elv)*cos(v_pch))*cos(v_rll);
c(3,4) = -c(3,1)*vx - c(3,2)*vy - c(3,3)*vz;
c(4,1) = c(3,1)/fl;   % last row is row 3 scaled by 1/focal-length
c(4,2) = c(3,2)/fl;
c(4,3) = c(3,3)/fl;
c(4,4) = c(3,4)/fl + 1;

Summary: Perspective Camera Model

• The perspective camera model can be written as

K = [ m_x·f    0      x_0 ]
    [   0    m_y·f    y_0 ]      x_0 = m_x·p_x,  y_0 = m_y·p_y
    [   0      0       1  ]

K is the 3×3 matrix of internal camera parameters (intrinsic parameters); [R | −R C̃] is the 3×4 matrix of external camera parameters (extrinsic parameters).

x = K [ R | −R C̃ ] X

where X is the world point (in P³) and x is the image point (in P²).
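One useful consequence of the summary equation, sketched with example values (the intrinsics, angle, and center below are invented): since P [C̃; 1] = K(R C̃ − R C̃) = 0, the camera center is the null vector of P and can be recovered from an SVD.

```python
import numpy as np

# Example camera: intrinsics K, rotation R about z, center C
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
C = np.array([1.0, 2.0, 3.0])

# Build P = K [R | -R C]
P = K @ np.hstack([R, (-R @ C).reshape(3, 1)])

# The camera center is the null vector of P: take the last right
# singular vector and de-homogenize it
_, _, Vt = np.linalg.svd(P)
center = Vt[-1]
center = center[:3] / center[3]
print(center)   # recovers C = [1, 2, 3]
```

Dividing by the last homogeneous coordinate removes the arbitrary sign and scale of the singular vector, so the recovered center matches C exactly.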