High-speed videography using a dense camera array


High-speed Object Tracking Based on Temporally Evaluated Optical Flow


Nobuhiro Kondoh, Ryuzo Okada, Junji Oaki, Daisuke Yamamoto, Hiroshi Miyazaki, Kouki Uesugi, Jiro Amemiya, Kenji Shirakawa, Atsushi Kunimatsu

We demonstrate an active camera system for high-speed object tracking using a high-speed camera that captures an image sequence at a rate of 1000 frames/s. By making effective use of the properties of high-frame-rate image sequences, our tracking algorithm robustly tracks a fast-moving object in a typical indoor environment containing a cluttered background under ordinary illumination, such as fluorescent lights. At such a high frame rate it is difficult to capture images with sufficient dynamic range for image processing, because the exposure time of a high-frame-rate camera is severely restricted; existing high-speed tracking systems therefore require strong light sources [1]. To overcome this problem, we developed a high-sensitivity high-frame-rate visual sensor system. Figure 1 shows a typical tracking result for a fast-moving object. The target moves so fast that it is blurred, and not clearly visible, in the video-rate images; in the high-frame-rate images it is clearly seen. Our system can therefore distinguish the target from the cluttered background with our tracking algorithm, and it tracks the target successfully. In the following sections we describe our demonstration system, the real-time object tracking algorithm, and the high-sensitivity high-frame-rate sensor system.

System Overview

Our system consists of a camera platform mounting a high-speed image sensor board and a video camera, a PC for image processing, and a PC for motor control (see Figure 2). The high-speed image sensor board captures a low-sensitivity image at 1000 fps. The low-sensitivity image is enhanced to a high-sensitivity image by the image capture/processing board in the PC for image processing, using the method described later. To realize image processing at 1000 fps, a dual-CPU PC is used for image processing: a process that reads the high-sensitivity images from the image capture/processing board runs on one CPU, and our tracking algorithm runs on the other. The position and motion of the target are sent to the PC for motor control, which adjusts the pan and tilt angles so that the target stays at the center of the field of view. The video camera on the camera platform provides a real-time reference view for comparing the quality of a video-rate image sequence with that of a 1000 fps sequence.

Figure 1. Tracking of a fast-moving object. (a) High-frame-rate image with tracking result ('+' marks are the corner points being tracked; the white rectangle is the target region circumscribing the corner points; flicker compensation is performed in a rectangular region whose brightness differs from that of other areas). (b) Video-rate image.

Figure 2. Our high-speed object tracking system. (a) Exterior of the system: camera platform with high-speed image sensor board and video camera; PC for image processing with PCI image capture/processing board. (b) Block diagram.

Tracking Algorithm

To distinguish a target object from a cluttered background, motion information, i.e., optical flow, is useful. Motions in a high-frame-rate image sequence are much smaller than those in a video-rate image sequence. By making effective use of this property, we estimate robust optical flow in a high-frame-rate image sequence, which we call "temporally evaluated optical flow," or simply "T-Flow" [2], and we present a robust tracking method based on it. First, the fluctuation of image intensities caused by the flicker of the fluorescent lights is eliminated. Next, assuming that there are many feature (corner) points on the surface of the target object, feature points are detected in the target region and T-Flow is extracted for them. Then, the motion and position of the target object are estimated from the optical-flow information. Finally, useless feature points are removed and the target region is updated. The initial target region is taken to be the first moving object entering the field of view, on the assumptions that the camera platform is initially stationary and that the field of view initially contains no moving objects.
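The paper gives no source code, so purely as an illustration, here is a minimal sketch of a corner-tracking loop of this general shape in Python with OpenCV. It stands in for, and is not, the authors' T-Flow (which evaluates flow temporally across many consecutive high-frame-rate frames); all function names and parameter values here are our own choices:

```python
import cv2
import numpy as np

def track(frames, target_roi):
    """Track corner features in target_roi; yield a bounding box per frame."""
    it = iter(frames)
    prev = next(it)                       # first 8-bit grayscale frame
    x, y, w, h = target_roi
    mask = np.zeros_like(prev)
    mask[y:y + h, x:x + w] = 255          # detect corners only in the target
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01,
                                  minDistance=5, mask=mask)
    for frame in it:
        # At 1000 fps inter-frame motion is tiny (the property the paper
        # exploits), so a small search window and one pyramid level suffice.
        # Fluorescent-flicker compensation, per the paper, is omitted here.
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(
            prev, frame, pts, None, winSize=(9, 9), maxLevel=1)
        good = nxt[status.ravel() == 1].reshape(-1, 2)   # drop lost points
        xs, ys = good[:, 0], good[:, 1]
        # Target region: the rectangle circumscribing the surviving corners.
        yield (xs.min(), ys.min(), xs.max() - xs.min(), ys.max() - ys.min())
        prev, pts = frame, good.reshape(-1, 1, 2)
```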
High-Sensitivity High-Frame-Rate Visual Sensor System

To improve the sensitivity of the high-frame-rate sensor, we enlarge the effective exposure area by forming a pseudo pixel from the summed image intensities of adjacent pixels; summing the intensities also reduces spatially independent noise in the original image. We compute the sum of the intensities of each adjacent 4x4 pixel block. The enhanced image obtained by our system is therefore 320x128 pixels, since the original image captured by the CMOS image sensor [3] is 1280x512 pixels at 1000 fps. Images obtained by this procedure are shown in Figure 3. Although they were captured in an office at night under ordinary fluorescent lights, the dynamic range is improved and the office interior is clearly visible; the dynamic range of the original image is insufficient for recognizing the interior. (A minimal sketch of this binning step appears after the references.)

Figure 3. Enhanced image. (a) Enhanced image. (b) Original image.

References
[1] I. Ishii, Y. Nakabo, and M. Ishikawa, "Target Tracking Algorithm for 1 ms Visual Feedback System Using Massively Parallel Processing," Proc. of ICRA, pp. 2309-2314, 1996.
[2] R. Okada, A. Maki, Y. Taniguchi, and K. Onoguchi, "Temporally Evaluated Optical Flow: Study on Accuracy," Proc. of ICPR, 2002.
[3] Micron Technology, Inc. http://www.micron.com/imaging/Products/MI-MV13.html
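A minimal NumPy sketch of the 4x4 binning step described in the sensor section above (the input shape matches the 1280x512 sensor; the widened dtype is our choice, to keep the 16-pixel sums from overflowing 8-bit data):

```python
import numpy as np

def bin4x4(frame):
    """Sum adjacent 4x4 blocks: a 512x1280 frame becomes 128x320."""
    h, w = frame.shape
    blocks = frame.reshape(h // 4, 4, w // 4, 4).astype(np.uint16)
    return blocks.sum(axis=(1, 3))   # each output pixel = sum of 16 inputs
```

Each pseudo pixel aggregates 16 photosites, so the signal grows roughly 16-fold while spatially independent noise grows only about 4-fold (the square root of 16), which is the sensitivity and noise benefit the authors describe.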

Sony compact full-frame camera (α7) datasheet


Key Features

A new frame of mind. No other full-frame, interchangeable-lens camera is this light or this portable. 24.3 MP of rich detail. A true-to-life 2.4-million-dot OLED viewfinder. Wi-Fi sharing and an expandable shoe system. It's all the full-frame performance you ever wanted, in a compact size that will change your perspective entirely.

World's smallest, lightest interchangeable-lens full-frame camera
Sony's Exmor image sensor takes full advantage of the full-frame format, but in a camera body less than half the size and weight of a full-frame DSLR.

Full-frame 24.3 MP resolution with 14-bit RAW output
A whole new world of high-quality images is realized through the 24.3 MP effective 35 mm full-frame sensor, a normal sensor range of ISO 100-25600, and a sophisticated balance of high resolving power, gradation and low noise. The BIONZ X image processor enables up to 5 fps high-speed continuous shooting and 14-bit RAW image data recording.

Fast Hybrid AF with phase detection for DSLR-like focusing speed
Enhanced Fast Hybrid autofocus combines speedy phase-detection AF with highly accurate contrast-detection AF, which has been accelerated through a new Spatial Object Detection algorithm, to achieve among the fastest autofocusing performance of any full-frame camera. First, phase-detection AF with 117 densely placed phase-detection AF points swiftly and efficiently moves the lens to bring the subject nearly into focus. Then contrast-detection AF with wide AF coverage fine-tunes the focusing in the blink of an eye.

Fast Intelligent AF for responsive, accurate operation with a full-frame sensor
The high-speed image processing engine and improved algorithms combine with optimized image-sensor readout speed to achieve ultra-high-speed AF despite the use of a full-frame sensor.

New Eye AF control
Even when capturing a subject partially turned away from the camera with a shallow depth of field, the face will be sharply focused thanks to extremely accurate eye detection that can prioritize a single pupil. A green frame appears over the prioritized eye when focus has been achieved, for easy confirmation. Eye AF can be assigned to a customizable button, allowing users to activate it instantly depending on the scene.

Fully compatible with Sony's E-mount lens system and new full-frame lenses
To take advantage of the lightweight on-the-go body, the α7 is fully compatible with Sony's E-mount lens system and an expanded line of E-mount compact and lightweight full-frame lenses from Carl Zeiss and Sony's premier G series.

Direct-access interface for fast, intuitive shooting control
Quick Navi Pro displays all major shooting options on the LCD screen so you can rapidly confirm settings and make adjustments as desired without searching through dedicated menus. When fleeting shooting opportunities arise, you'll be able to respond swiftly with just the right settings.

High-contrast 2.4M-dot OLED EVF for eye-level framing
View every scene in rich detail with the XGA OLED Tru-Finder, which features OLED improvements and the same 3-lens optical system used in the flagship α99. The viewfinder faithfully displays what will appear in your recording, including the effects of your camera settings, so you can accurately monitor the results. You'll enjoy rich tonal gradations and 3 times the contrast of the α99.
High-end features like 100% frame coverage and a wide viewing angle are also provided.

ILCE-7K/B α7 (Alpha 7) Interchangeable Lens Camera

3.0" 1.23M-dot LCD tilts for high- and low-angle framing
The tiltable 3.0" (1,229k-dot) Xtra Fine LCD display makes it easy to photograph over crowds, or low to capture pets eye to eye, by swinging up approx. 84° and down approx. 45°. Easily scroll through menus and preview life thanks to WhiteMagic technology that dramatically increases visibility in bright daylight. The large display delivers brilliant-quality still images and movies while enabling easy focusing operation.

Simple connectivity to smartphones via Wi-Fi or NFC
Connectivity with smartphones for one-touch sharing and one-touch remote has been simplified with Wi-Fi/NFC control. In addition to Wi-Fi support for connecting to smartphones, the α7 also supports NFC (near-field communication), providing one-touch connection convenience when transferring images to Android smartphones and tablets. Users need only touch devices to connect; no complex set-up is required. Moreover, when using Smart Remote Control, a feature that allows shutter release to be controlled by a smartphone, connection to the smartphone can be established by simply touching compatible devices.

New BIONZ X image processing engine
Sony proudly introduces the new BIONZ X image processing engine, which faithfully reproduces textures and details in real time, as seen by the naked eye, via extra-high-speed processing capabilities. Together with front-end LSI (large-scale integration) that accelerates processing in the earliest stages, it enables more natural details, more realistic images, richer tonal gradations and lower noise, whether you shoot still images or movies.

Full HD movie at 24p/60i/60p with uncompressed HDMI output
Capture full 1920 x 1080 HD uncompressed clean-screen video files to external recording devices via an HDMI connection at 60p and 60i frame rates. Selectable in-camera AVCHD codec frame rates include super-smooth 60p, standard 60i or cinematic 24p. An MP4 codec is also available for smaller files for easier upload to the web.

Up to 5 fps shooting to capture the decisive moment
When your subject is moving fast, you can capture the decisive moment with clarity and precision by shooting at speeds up to 5 frames per second. New faster, more accurate AF tracking, made possible by Fast Hybrid AF, uses powerful predictive algorithms and subject-recognition technology to track every move with greater speed and precision.

PlayMemories Camera Apps allows feature upgrades
Personalize your camera by adding new features of your choice with PlayMemories Camera Apps. Find apps to fit your shooting style, from portraits, detailed close-ups and sports to time lapse, motion shot and much more. Use apps that shoot, share and save photos using Wi-Fi, making it easy to control and view your camera from a smartphone, post photos directly to Facebook, or back up images to the cloud without connecting to a computer.

4K still image output by HDMI(8) or Wi-Fi for viewing on 4K TVs
Enjoy Ultra High Definition slide shows directly from the camera on a compatible 4K television. The α7 converts images for optimized 4K image-size playback (8 MP). Enjoy expressive rich colors and amazing detail like never before. Images can be viewed via an optional HDMI cable or Wi-Fi.
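As a quick sanity check on that 8 MP figure (our arithmetic, not a line from the spec sheet): a 16:9 Ultra HD frame is 3840 x 2160 = 8,294,400 pixels, i.e. roughly 8.3 MP, which is why the 24.3 MP stills are downconverted to about 8 MP for 4K playback.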
Vertical Grip Capable
Enjoy long hours of comfortable operation in the vertical orientation with this secure vertical grip, which can hold two batteries for longer shooting and features dust and moisture protection.

Mount Adaptors
Both of these 35 mm full-frame-compatible adaptors let you mount the α7R with any A-mount lens. The LA-EA4 additionally features a built-in AF motor, aperture-drive mechanism and Translucent Mirror Technology to enable continuous phase-detection AF. Both adaptors also feature a tripod hole that allows mounting of a tripod to support large A-mount lenses.

Specifications (notes)
1. Among interchangeable-lens cameras with a full-frame sensor, as of October 2013.
2. Records in up to 29-minute segments.
3. 99 points when an APS-C lens compatible with Fast Hybrid AF is mounted.
7. Actual performance varies based on settings, environmental conditions, and usage. Battery capacity decreases over time and use.
8. Requires compatible BRAVIA HDTV and cable sold separately.
9. Autofocus function available with Sony E-mount lenses, and with Sony A-mount SSM and SAM series lenses when using the LA-EA2/EA4 lens adaptor.

Hikvision 32-channel IP camera NVR datasheet


Key Features
● Up to 32-ch IP camera inputs, plug & play with 16 power-over-Ethernet (PoE) interfaces
● H.265+/H.265/H.264+/H.264 video formats
● Up to 2-ch @ 12 MP, 3-ch @ 8 MP, 6-ch @ 4 MP, or 12-ch @ 1080p decoding capacity
● Up to 256 Mbps incoming bandwidth
● Adopts Hikvision AcuSense technology to minimize manual effort and security costs

Smart Functions
● All channels support Motion Detection 2.0
● 2-ch video analysis for human and vehicle recognition to reduce false alarms
● 1-ch facial recognition for video stream, or 4-ch facial recognition for face pictures
● Smart search for a selected area in the video, and smart playback to improve playback efficiency

Professional and Reliable
● H.265+ compression effectively reduces storage space by up to 75%
● Adopts stream-over-TLS encryption technology for a more secure stream transmission service

HD Video Output
● Independent HDMI and VGA outputs
● HDMI video output at up to 4K resolution

Storage and Playback
● Up to 2 SATA interfaces for HDD connection (up to 10 TB capacity per HDD)
● 16-ch synchronous playback

Network & Ethernet Access
● 16 independent PoE network interfaces
● 1 self-adaptive 10/100/1000 Mbps Ethernet interface
● Hik-Connect for easy network management

Specification

Intelligent Analytics
AI by Device: Facial recognition, perimeter protection, motion detection 2.0
AI by Camera: Facial recognition, perimeter protection, throwing objects from building, motion detection 2.0, ANPR, VCA

Facial Recognition
Facial Detection and Analytics: Face picture comparison, human face capture, face picture search
Face Picture Library: Up to 16 face picture libraries, with up to 20,000 face pictures in total (each picture ≤ 4 MB, total capacity ≤ 1 GB)
Facial Detection and Analytics Performance: 1-ch, 8 MP
Face Picture Comparison: 4-ch

Motion Detection 2.0
By Device: All channels, 4 MP (up to 8 MP when enhanced SVC mode is enabled) video analysis for human and vehicle recognition to reduce false alarms
By Camera: All channels

Perimeter Protection
By Device: 2-ch, 4 MP (HD network camera, H.264/H.265) video analysis for human and vehicle recognition to reduce false alarms
By Camera: All channels

Video and Audio
IP Video Input: 32-ch
Incoming Bandwidth: 256 Mbps
Outgoing Bandwidth: 160 Mbps
HDMI Output: 1-ch, 4K (3840 × 2160)/30 Hz, 2K (2560 × 1440)/60 Hz, 1920 × 1080/60 Hz, 1600 × 1200/60 Hz, 1280 × 1024/60 Hz, 1280 × 720/60 Hz, 1024 × 768/60 Hz
VGA Output: 1-ch, 1920 × 1080/60 Hz, 1280 × 1024/60 Hz, 1280 × 720/60 Hz
Video Output Mode: HDMI/VGA independent outputs
CVBS Output: N/A
Audio Output: 1-ch, RCA (2.0 Vp-p, 1 kΩ, using the audio input)
Two-Way Audio: 1-ch, RCA (linear, 1 kΩ)

Decoding
Decoding Format: H.265/H.265+/H.264+/H.264
Recording Resolution: 12 MP/8 MP/6 MP/5 MP/4 MP/3 MP/1080p/UXGA/720p/VGA/4CIF/DCIF/2CIF/CIF/QCIF
Synchronous Playback: 16-ch
Decoding Capability: AI on: 1-ch @ 12 MP (30 fps), 2-ch @ 8 MP (30 fps), 4-ch @ 4 MP (30 fps), 8-ch @ 1080p (30 fps); AI off: 2-ch @ 12 MP (30 fps), 3-ch @ 8 MP (30 fps), 6-ch @ 4 MP (30 fps), 12-ch @ 1080p (30 fps)
Stream Type: Video, Video & Audio
Audio Compression: G.711ulaw/G.711alaw/G.722/G.726/AAC

Network
Remote Connections: 128
API: ONVIF (Profile S/G); SDK; ISAPI
Compatible Browsers: IE11, Chrome V57, Firefox V52, Safari V12, Edge V89, or above
Network Protocols: TCP/IP, DHCP, IPv4, IPv6, DNS, DDNS, NTP, RTSP, SADP, SMTP, SNMP, NFS, iSCSI, ISUP, UPnP™, HTTP, HTTPS
Network Interface: 1 RJ-45 10/100/1000 Mbps self-adaptive Ethernet interface

PoE
Interface: 16, RJ-45 10/100 Mbps self-adaptive Ethernet interfaces
Power: ≤ 200 W
Standard: IEEE 802.3af/at

Auxiliary Interfaces
SATA: 2 SATA interfaces
Capacity: Up to 10 TB for each HDD
USB Interface: Front panel: 1 × USB 2.0; rear panel: 1 × USB 2.0
Alarm In/Out: 4/1

General
GUI Languages: English, Russian, Bulgarian, Hungarian, Greek, German, Italian, Czech, Slovak, French, Polish, Dutch, Portuguese, Spanish, Romanian, Turkish, Japanese, Danish, Swedish, Norwegian, Finnish, Korean, Traditional Chinese, Thai, Estonian, Vietnamese, Croatian, Slovenian, Serbian, Latvian, Lithuanian, Uzbek, Kazakh, Arabic, Ukrainian, Kyrgyz, Brazilian Portuguese, Indonesian
Power Supply: 100 to 240 VAC, 50 to 60 Hz
Consumption: ≤ 15 W (without HDD and with PoE off)
Working Temperature: -10 °C to 55 °C (14 °F to 131 °F)
Working Humidity: 10% to 90%
Dimensions (W × D × H): 385 mm × 315 mm × 52 mm (15.2" × 12.4" × 2.0")
Weight: ≤ 3 kg (without HDD, 6.6 lb.)

Certification
FCC: Part 15 Subpart B, ANSI C63.4-2014
CE: EN 55032: 2015, EN 61000-3-2, EN 61000-3-3, EN 50130-4, EN 55035: 2017
Obtained Certifications: CE, FCC, IC, CB, KC, UL, RoHS, REACH, WEEE, RCM, UKCA, LOA, BIS

Note: Facial recognition, motion detection 2.0 and perimeter protection cannot be enabled at the same time.

Physical Interfaces
1. Network interfaces with PoE function
2. LAN network interface
3. AUDIO IN
4. AUDIO OUT
5. ALARM IN and OUT
6. HDMI interface
7. VGA interface
8. USB interface
9. GND
10. Power supply
11. Power switch

Available Model
DS-7632NXI-K2/16P
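The bandwidth and PoE figures above lend themselves to a quick planning check; a rough Python sketch (the per-camera bitrate and power draw are example assumptions, not Hikvision figures):

```python
# Rough capacity check for a 32-camera build-out on this NVR.
INCOMING_LIMIT_MBPS = 256      # datasheet: incoming bandwidth
POE_BUDGET_W = 200             # datasheet: total budget across 16 PoE ports

cameras, bitrate_mbps = 32, 4              # assume 4 Mbps H.265 main streams
aggregate = cameras * bitrate_mbps
print(f"streams: {aggregate}/{INCOMING_LIMIT_MBPS} Mbps")   # 128/256: fits
# At 8 Mbps per camera the same 32 channels would hit the 256 Mbps ceiling.

poe_cameras, watts_each = 16, 9            # assume ~9 W per PoE dome
print(f"PoE: {poe_cameras * watts_each}/{POE_BUDGET_W} W")  # 144/200 W: fits
```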

Bosch FLEXIDOME IP indoor 5000 HD security camera datasheet


• 1080p resolution for sharp images
• Easy to install, with auto zoom/focus lens, wizard and pre-configured modes
• Fully configurable quad streaming
• IR version with 15 m (50 ft) viewing distance
• Regions of interest and E-PTZ

The HD indoor dome cameras from Bosch are professional surveillance cameras that provide high-quality HD images for demanding security and surveillance network requirements. These domes are true day/night cameras offering excellent performance day or night. A version with a built-in active infrared illuminator provides high performance in extreme low-light environments.

System overview

Easy-to-install, stylish indoor dome
Ideal for indoor use, the stylish design is suitable for installations where appearance and flexible coverage are important. The varifocal lens allows you to choose the coverage area that best suits your application. Using the proprietary pan/tilt/rotation mechanism, installers can select the exact field of view. Mounting options are numerous, including surface, wall, and suspended-ceiling mounting.
The automatic zoom/focus lens wizard makes it easy for an installer to accurately zoom and focus the camera for both day and night operation. The wizard is activated from the PC or from the on-board camera push button, making it easy to choose the workflow that suits best.
The AVF (Automatic Varifocal) feature means that the zoom can be changed without opening the camera. The automatic motorized zoom/focus adjustment with 1:1 pixel mapping ensures the camera is always accurately focused.

Functions

Intelligent Dynamic Noise Reduction reduces bandwidth and storage requirements
The camera uses Intelligent Dynamic Noise Reduction, which actively analyzes the contents of a scene and reduces noise artifacts accordingly. The low-noise image and the efficient H.264 compression technology provide clear images while reducing bandwidth and storage by up to 50% compared to other H.264 cameras. This results in reduced-bandwidth streams that still retain high image quality and smooth motion. The camera provides the most usable image possible by cleverly optimizing the detail-to-bandwidth ratio.

Area-based encoding
Area-based encoding is another feature which reduces bandwidth. Compression parameters can be set for up to eight user-definable regions. This allows uninteresting regions to be highly compressed, leaving more bandwidth for important parts of the scene.

Bitrate-optimized profile
The average typical optimized bandwidth in kbit/s for various image rates is shown in a table (not reproduced in this extract).

Multiple streams
The innovative multi-streaming feature delivers various H.264 streams together with an M-JPEG stream. These streams facilitate bandwidth-efficient viewing and recording as well as integration with third-party video management systems. Depending on the resolution and frame rate selected for the first stream, the second stream provides a copy of the first stream or a lower-resolution stream. The third stream uses the I-frames of the first stream for recording; the fourth stream shows a JPEG image at a maximum of 10 Mbit/s.

Regions of interest and E-PTZ
Regions of interest (ROI) can be user-defined. The remote E-PTZ (electronic pan, tilt and zoom) controls allow you to select specific areas of the parent image. These regions produce separate streams for remote viewing and recording. These streams, together with the main stream, allow the operator to separately monitor the most interesting part of a scene while still retaining situational awareness.
Built-in microphone, two-way audio and audio alarm
The camera has a built-in microphone that allows operators to listen in on the monitored area. Two-way audio allows the operator to communicate with visitors or intruders via an external audio line input and output. Audio detection can be used to generate an alarm if needed. If required by local laws, the microphone can be permanently blocked via a secure license key.

Tamper and motion detection
A wide range of configuration options is available for alarms signaling camera tampering. A built-in algorithm for detecting movement in the video can also be used for alarm signaling.

Storage management
Recording management can be controlled by the Bosch Video Recording Manager (VRM), or the camera can use iSCSI targets directly without any recording software.

Edge recording
The microSD card slot supports up to 2 TB of storage capacity. A microSD card can be used for local alarm recording. Pre-alarm recording in RAM reduces recording bandwidth on the network, or, if microSD card recording is used, extends the effective life of the storage medium.

Cloud-based services
The camera supports time-based or alarm-based JPEG posting to four different accounts. These accounts can address FTP servers or cloud-based storage facilities (for example, Dropbox). Video clips or JPEG images can also be exported to these accounts. Alarms can be set up to trigger an e-mail or SMS notification so you are always aware of abnormal events.

Easy installation
Power for the camera can be supplied via a Power-over-Ethernet-compliant network cable connection. With this configuration, only a single cable connection is required to view, power, and control the camera. Using PoE makes installation easier and more cost-effective, as cameras do not require a local power source. The camera can also be supplied with power from +12 VDC power supplies. For trouble-free network cabling, the camera supports Auto-MDIX, which allows the use of straight or crossover cables.

True day/night switching
The camera incorporates mechanical filter technology for vivid daytime color and exceptional night-time imaging while maintaining sharp focus under all lighting conditions.

Hybrid mode
An analog video output enables the camera to operate in hybrid mode. This mode provides simultaneous high-resolution HD video streaming and an analog video output via an SMB connector. The hybrid functionality offers an easy migration path from legacy CCTV to a modern IP-based system.

Access security
Password protection with three levels and 802.1x authentication are supported. To secure web browser access, use HTTPS with an SSL certificate stored in the camera.

Complete viewing software
There are many ways to access the camera's features: using a web browser, with the Bosch Video Management System, with the free-of-charge Bosch Video Client or Video Security Client, with the video security mobile app, or via third-party software.

Video security app
The Bosch video security mobile app has been developed to enable anywhere-access to HD surveillance images, allowing you to view live images from any location. The app is designed to give you complete control of all your cameras, from panning and tilting to zoom and focus functions. It's like taking your control room with you.
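Tying the edge-recording capacity described above to a working bitrate gives a rough retention figure; a sketch (the 5 Mbit/s figure is our assumed 1080p H.264 bitrate, not a Bosch specification):

```python
# Days of continuous local recording on a microSD card at a constant bitrate.
def retention_days(card_bytes, bitrate_bps):
    return card_bytes * 8 / bitrate_bps / 86_400

# 2 TB card (the slot's stated maximum) at an assumed 5 Mbit/s:
print(f"{retention_days(2e12, 5e6):.0f} days")   # ~37 days
```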
This app, together with the separately available Bosch transcoder, allows you to fully utilize our dynamic transcoding features so you can play back images even over low-bandwidth connections.

System integration
The camera conforms to the ONVIF Profile S, ONVIF Profile Q and ONVIF Profile G specifications. Compliance with these standards guarantees interoperability between network video products regardless of manufacturer. Third-party integrators can easily access the internal feature set of the camera for integration into large projects. Visit the Bosch Integration Partner Program (IPP) website for more information.

HD standards
Complies with the SMPTE 274M-2008 standard:
– Resolution: 1920 x 1080
– Scan: progressive
– Color representation: complies with ITU-R BT.709
– Aspect ratio: 16:9
– Frame rate: 25 and 30 frames/s
Complies with the SMPTE 296M-2001 standard:
– Resolution: 1280 x 720
– Scan: progressive
– Color representation: complies with ITU-R BT.709
– Aspect ratio: 16:9
– Frame rate: 25 and 30 frames/s

Installation/configuration notes
Dimensions in mm (inch) (drawing not reproduced in this extract).
Parts included:
• Camera
• Screw kit
• Installation documentation

Technical specifications
Sensitivity: (3200 K, reflectivity 89%, F1.3, 30 IRE; values not reproduced in this extract)

Ordering information
FLEXIDOME IP indoor 5000 HD
Professional IP dome camera for indoor HD surveillance. Varifocal 3 to 10 mm f/1.3 lens; IDNR; day/night; H.264 quad-streaming; cloud services; motion/tamper/audio detection; microphone; 1080p.
Order number NIN-51022-V3

FLEXIDOME IP indoor 5000 IR
Professional IP dome camera for indoor HD surveillance. Varifocal 3 to 10 mm f/1.3 lens; IDNR; day/night; H.264 quad-streaming; cloud services; motion/tamper/audio detection; microphone; 1080p; infrared.
Order number NII-51022-V3

FLEXIDOME IP indoor 5000 HD
Professional IP dome camera for indoor HD surveillance. Automatic varifocal 3 to 10 mm f/1.3 lens; DC iris; IDNR; day/night; H.264 quad-streaming; cloud services; motion/tamper/audio detection; microphone; 1080p.
Order number NIN-50022-A3

FLEXIDOME IP indoor 5000 IR
Professional IP dome camera for indoor HD surveillance. Automatic varifocal 3 to 10 mm f/1.3 lens; DC iris; IDNR; day/night; H.264 quad-streaming; cloud services; motion/tamper/audio detection; microphone; 1080p; infrared.
Order number NII-50022-A3

Accessories
NDA-LWMT-DOME Dome Wall Mount
Sturdy L-shaped wall bracket for dome cameras.
Order number NDA-LWMT-DOME

NDA-ADTVEZ-DOME Dome Adapter Bracket
Adapter bracket (used together with an appropriate wall or pipe mount, or surface-mount box).
Order number NDA-ADTVEZ-DOME

VEZ-A2-WW Wall Mount
Wall mount (Ø145/149 mm) for dome cameras (use together with the appropriate dome adapter bracket); white.
Order number VEZ-A2-WW

VEZ-A2-PW Pipe Mount
Pendant pipe mount (Ø145/149 mm) for dome cameras (use together with the appropriate dome adapter bracket); white.
Order number VEZ-A2-PW

LTC 9213/01 Pole Mount Adapter
Flexible pole mount adapter for camera mounts (use together with the appropriate wall mount bracket).
Max. load 9 kg (20 lb); 3 to 15 inch diameter poles; stainless steel straps.
Order number LTC 9213/01

NDA-FMT-DOME In-ceiling Mount
In-ceiling flush mounting kit for dome cameras (Ø157 mm).
Order number NDA-FMT-DOME

NDA-ADT4S-MINDOME 4S Surface Mount Box
Surface mount box (Ø145 mm / Ø5.71 in) for dome cameras (use together with the appropriate dome adapter bracket).
Order number NDA-ADT4S-MINDOME

Monitor/DVR Cable SMB 0.3M
0.3 m (1 ft) analog cable, SMB (female) to BNC (female), to connect the camera to a coaxial cable.
Order number NBN-MCSMB-03M

Monitor/DVR Cable SMB 3.0M
3 m (9 ft) analog cable, SMB (female) to BNC (male), to connect the camera to a monitor or DVR.
Order number NBN-MCSMB-30M

NPD-5001-POE Midspan PoE Injector
Power-over-Ethernet midspan injector for use with PoE-enabled cameras; 15.4 W, 1 port.
Order number NPD-5001-POE

NPD-5004-POE Midspan PoE Injector
Power-over-Ethernet midspan injector for use with PoE-enabled cameras; 15.4 W, 4 ports.
Order number NPD-5004-POE

© Bosch Security Systems 2016 | Data subject to change without notice

High-speed camera product specifications (Olympus i-SPEED range)


High Speed Video Range Overview

| | i-SPEED LT | i-SPEED 2 | i-SPEED TR | i-SPEED 3 | i-SPEED FS |
|---|---|---|---|---|---|
| Resolution (full sensor) | 800 x 600 pixels | 800 x 600 pixels | 1280 x 1024 pixels | 1280 x 1024 pixels | 1280 x 1024 pixels |
| Speed at full resolution | 1,000 fps | 1,000 fps | 2,000 fps | 2,000 fps | 2,000 fps |
| Maximum recording speed | 2,000 fps | 33,000 fps | 10,000 fps | 150,000 fps | 1,000,000 fps |
| Shutter | User selectable to 5 microseconds | User selectable to 5 microseconds | User selectable to 2.14 microseconds | User selectable to 1 microsecond | User selectable to 200 nanoseconds |
| Internal memory options | 1 GB/2 GB/4 GB | 2 GB/4 GB | 4 GB/8 GB/16 GB | 4 GB/8 GB/16 GB | 4 GB/8 GB/16 GB |
| Lens mount | C-mount | C-mount | F-mount | F-mount | F-mount |
| CDU compatibility | ✓ | ✓ | ✓ | ✓ | ✓ |
| CF card storage compression | ✓ | ✓ | ✓ | ✓ | ✓ |
| Ethernet connection | ✗ | ✓ | ✓ | ✓ | ✓ |
| Multiple camera synchronisation | ✗ | ✓ | ✓ | ✓ | ✓ |
| Text/logo overlay | ✗ | ✗ | ✓ | ✓ | ✓ |
| User settings | ✗ | ✗ | ✓ | ✓ | ✓ |
| Battery backup | ✗ | ✗ | Optional | ✓ | ✓ |
| i-FOCUS | ✗ | ✗ | ✓ | ✓ | ✓ |
| i-CHEQ | ✗ | ✗ | ✓ | ✓ | ✓ |
| HiG options | ✗ | ✓ | ✗ | ✓ | ✓ |
| IRIG-B | ✗ | ✗ | ✗ | ✗ | ✓ |
| Economy modes | 3 | 9 | 3 | 9 + manual | 9 + manual |

See individual camera features and specification sheets for full product details.

High Speed, High Quality Imaging

With years of experience in high-quality digital image processing, the i-SPEED product range from Olympus offers high-speed video cameras suitable for numerous applications, including: automotive crash testing, research and development, production, fault diagnosis, bottling and packaging, pharmaceutical, manufacturing, component testing, ballistic and broadcast industries. Olympus is a world-leading manufacturer of imaging products with a long history of producing high-quality systems, providing solutions within a variety of industrial applications. The Olympus i-SPEED high-speed video range is no exception: whatever the high-speed application, industry or specialist requirement, Olympus has a high-speed camera for you.

• Multiple camera synchronisation
• High resolution at higher frame speeds
• On-board image measurement
• Portable and easy to use
• High-speed electronic shuttering

The i-SPEED LT (up to 2,000 fps) has been designed to be quick to set up and simple to use. With the ability of instant video playback, it is a complete 'point and shoot' inspection tool.

The i-SPEED 2 (up to 33,000 fps) is an invaluable tool for general research and development requirements, with recording rates of up to 33,000 fps and instant playback and analysis via the CDU.

CONNECTION TO FIBERSCOPE, BORESCOPE OR MICROSCOPE
With over 30 years' experience within the remote visual inspection industry, the Olympus i-SPEED camera has been optimised for use with Olympus fiberscopes, rigid borescopes and microscopes.

DATA CHANNELS
Multi-channel analogue data input from 0-5 V provides graphical representation synchronised exactly with the captured video.

MULTIPLE CAMERA SYNCHRONISATION
By synchronising two or more i-SPEED cameras, multiple views of the same event can be obtained and downloaded via Ethernet for review.

ETHERNET CONNECTION
The Olympus i-SPEED 2 can be connected to a PC via Ethernet, allowing full camera control and image download.

DOWNLOAD VIA COMPACT FLASH
By selecting the frames required for review via the CDU, download times are reduced. The video clip is easily transferred from the camera's internal memory to a removable Compact Flash card in either compressed or uncompressed format for transfer to a PC.
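Given the sensor sizes, frame rates and memory options in the range table above, raw record time (and hence how much there is to download) can be estimated; a sketch, assuming 10-bit raw pixels (our assumption; Olympus does not quote a bit depth here):

```python
# Seconds of recording that fit in camera memory at a given raw data rate.
def record_seconds(mem_gb, width, height, fps, bits_per_px=10):
    bytes_per_frame = width * height * bits_per_px / 8
    return mem_gb * 1e9 / (bytes_per_frame * fps)

# i-SPEED 3, 16 GB option, full 1280 x 1024 sensor at 2,000 fps:
print(f"{record_seconds(16, 1280, 1024, 2000):.1f} s")   # ~4.9 s
```

A few seconds is typical for full-sensor high-speed capture, which is why the economy modes described later crop the sensor to extend recording time.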
CONTROLLER DISPLAY UNIT (CDU)
The unique CDU facilitates the operation of the i-SPEED high-speed video camera through an intuitive menu structure, without the need for a PC, making the system portable and easy to use while also providing instant playback.

The i-SPEED TR (up to 10,000 fps) provides high resolution and extreme low-light sensitivity at recording speeds of up to 10,000 fps, making it the ideal analysis tool for research and development.

USER SETTINGS
Up to five favoured camera settings or test parameters can be stored on the camera and easily restored.

i-FOCUS
A feature unique to Olympus, i-FOCUS is an electronic image focusing tool which provides confirmation of focus within a live image and a visual indication of depth of field.

LUMINANCE HISTOGRAM
Provides a graphical representation of the average brightness within a live image, allowing easy aperture set-up in real time.

The i-SPEED 3 (up to 150,000 fps) has been designed to an advanced specification, providing high-frame-rate capture, on-board image analysis tools and electronic shuttering to 1 µs.

BATTERY BACK-UP
The i-SPEED 3 has an internal battery back-up to ensure camera operation should AC power fail.

VIDEO TRIGGER*
An advanced triggering function that begins the recording process when movement occurs within a defined area of a live image.
*Triggering utilises changes in the luminance of the image.

VIDEO CLIP PERSONALISATION
Text and company logos can be permanently burnt into video clips to assist with report generation and video accreditation.

MEASUREMENT
Accessed via the CDU, distance, angle, velocity, angular velocity, acceleration and angular acceleration measurements can be calculated, allowing instant analysis of captured images.

IMAGE PROCESSING
Captured footage can be processed via the CDU to enhance an image and identify detail that would not otherwise be seen.

i-SPEED Software Suite
The i-SPEED Software Suite is designed to mirror the ease of use and high-specification power of the camera range. There are three levels of PC software available for use with all cameras:

Control
Control is supplied as standard with all i-SPEED cameras (excluding the LT model):
• Connect to camera*
• Control camera*
• Download images to PC
• Manual distance, speed, angle and angular velocity measurements
Additional functionality available when using Control with i-SPEED LT/3/FS:
• i-FOCUS for confirmation of depth of focus
• i-CHEQ for instant camera status determination
• Luminance histogram for precise image set-up

Control-Pro
By upgrading to Control-Pro as an optional extra, the following features are added to the Control software:
• Auto capture and download to PC
• Free-text facility including data and frame comments
• Video annotation
• HTML report generator
• 64-point auto-tracking
• Perspective projection
• Data filtering
• Video triggering
• Lens distortion correction and saving
• Permanent text burned onto video, including customer logos

i-SPEED Viewer
To allow i-SPEED footage to be reviewed within an organisation (in addition to the Control and Control-Pro PC software suites), a simple Viewer application is available as a free-of-charge download from the Olympus website. This offers the capability to view i-SPEED footage and change playback speed only.

*i-SPEED LT is not Ethernet enabled.
The i-SPEED Software Suite may be purchased to enable saved-image manipulation and analysis as described. For further information on i-SPEED software options, please see the dedicated literature.

The i-SPEED FS (up to 1,000,000 fps; 1280 x 1024 resolution at 2,000 fps) provides high resolution and extreme low-light sensitivity at recording speeds of up to 1,000,000 fps, with an electronic global shutter selectable to 0.2 µs, making the camera suitable for capturing even the quickest high-speed phenomena.

IRIG-B
An on-board IRIG-B receiver provides lock synchronisation/lock exposure to sub-5-microsecond accuracy.

ECONOMY MODES
Up to nine preset economy modes are available to utilise a smaller area of the CMOS sensor, which provides extended recording times without the need to reduce frame rates. Manual economy mode allows the user to define the sensor size.

i-CHEQ
A display of external LED indicators provides confirmation of camera record status, useful in ballistics or crash-test environments to provide absolute confidence that tests will be captured.

Service: adding value to Olympus i-SPEED cameras
• Local, quick and efficient service and repairs
• Product and application support
• Continued education through local high-speed video training courses
• Loan equipment available during servicing and repair
• Expertise in endoscopy, microscopy and non-destructive testing
• System upgrades
• Flood-lighting

Accessories
To complement the i-SPEED digital high-speed video cameras and to suit the varying and demanding needs of high-speed video applications, Olympus offers a wide range of lens and lighting accessories, including C-mount and F-mount lenses. Olympus has over 30 years of experience in the Remote Visual Inspection (RVI) industry and offers a range of products suitable for use with i-SPEED cameras. For more information on the products below, please see the Olympus RVI Product Guide.

SERIES 5 BORESCOPES
Olympus rigid borescopes offer high-quality images and provide visual access to confined areas. Available in a range of diameters from 4 to 16 mm, the Olympus Series 5 borescope can be connected to any high-speed camera via an optical adaptor to allow capture of high-speed applications.

INDUSTRIAL FIBERSCOPES
Olympus also offers a range of flexible fiberscopes for use when direct-line access to the inspection area is not available. With diameters ranging from 6 to 11 mm, lengths of up to 3 m and four-way tip articulation, Olympus fiberscopes can provide a view of the hardest-to-reach high-speed applications.

LIGHT SOURCES
Olympus high-intensity light sources can provide illumination for applications that require the use of borescopes and fiberscopes, or can be utilised for focused illumination of high-speed events.

OPTICAL ADAPTORS
A range of adaptors is available for connection between i-SPEED cameras and Olympus borescopes and fiberscopes.
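The CDU measurement functions listed above reduce to simple kinematics on calibrated pixel coordinates; a sketch of the velocity case (the calibration and displacement values are invented for the example):

```python
# Speed of a tracked point from its per-frame pixel displacement.
def velocity_m_per_s(dx_px, dy_px, mm_per_px, fps):
    dist_mm = (dx_px ** 2 + dy_px ** 2) ** 0.5 * mm_per_px
    return dist_mm * fps / 1000      # one displacement per frame interval

# Example: 12 px shift per frame, 0.5 mm/px calibration, 2,000 fps:
print(f"{velocity_m_per_s(12, 0, 0.5, 2000):.1f} m/s")   # 12.0 m/s
```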

Cisco Meraki MV camera series introduction


MV21 & MV71 Cloud Managed Security Cameras (Datasheet | MV Series)

INTRODUCING MV
With an unobtrusive industrial design suitable for any setting, and available in indoor (MV21) and outdoor (MV71) models, the MV family simplifies and streamlines the unnecessarily complex world of security cameras. By eliminating servers and video recorders, MV frees administrators to spend less time on deployment and maintenance, and more time on meeting business needs. High-endurance solid-state on-camera storage eliminates the concern of excessive upload bandwidth use and provides robust failover protection. As long as the camera has power it will continue to record, even without network connectivity. Historical video can be quickly searched and viewed using motion-based indexing, and advanced export tools allow evidence to be shared with security staff or law enforcement easily. Because the cameras are connected to Meraki's cloud infrastructure, security updates and new software are pushed to customers automatically. This gives administrators peace of mind that the infrastructure is not only secure, but will continue to meet future needs. Simply put, the MV brings Meraki magic to the security camera world.

OVERVIEW
Cisco Meraki's MV family of security cameras are exceptionally simple to deploy and configure. Their integration into the Meraki dashboard, ease of deployment, and use of cloud-augmented edge storage eliminate the cost and complexity required by traditional security camera solutions. Like all Meraki products, MV cameras provide zero-touch deployment. Using just serial numbers, an administrator can add devices to the Meraki dashboard and begin configuration before the hardware even arrives on site. In the Meraki dashboard, users can easily stream video and create video walls for monitoring key areas across multiple locations, without ever configuring an IP address or installing a plugin.

Product highlights
• Meraki dashboard simplifies operation
• Cloud-augmented edge storage eliminates infrastructure
• Suitable for deployments of all sizes: 1 camera or 1000+
• Intelligent motion indexing with search engine
• Built-in video analytics tools
• Secure encrypted control architecture
• No special software or browser plugins required
• Granular user access controls

CUTTING-EDGE ARCHITECTURE
Meraki's expertise in distributed computing has come to the security camera world. With cloud-augmented edge storage, MV cameras provide ground-breaking ease of deployment, configuration, and operation. Completely eliminating the network video recorder (NVR) not only reduces equipment CAPEX, but the simplified architecture also decreases OPEX costs. Each MV camera comes with integrated, ultra-reliable, industrial-grade storage. This cutting-edge technology allows the system to efficiently scale to any size because storage expands with the addition of each camera. Plus, administrators can rest easy knowing that even if the network connection cuts out, the cameras will continue to record footage.
(Diagram: scene being recorded → on-device storage → local video access, and remote video access via the Meraki cloud.)

OPTIMIZED RETENTION
MV takes a unique approach to handling motion data by analyzing video on the camera itself, but indexing motion in the cloud. This hybrid motion-based retention strategy, plus scheduled recording, gives users the ability to define the video retention method that works best for every deployment. The motion-based retention tool allows users to pick the video bit rate and frame rate to find the perfect balance between storage length and image quality. All cameras retain continuous footage as a safety net for the last 72 hours before intelligently trimming stored video that contains no motion, adding one more layer of security. Scheduled recording determines when cameras are recording and when they are not: create schedule templates for groups of cameras and store only what's needed, nothing more, or turn off recording altogether and only view live footage for selective privacy. Best of all, the dashboard provides a real-time retention estimate for each camera, removing the guesswork.
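The dashboard computes its retention estimate from the encode settings; a rough back-of-envelope version (the bitrate and motion fraction are our assumptions, not Meraki figures):

```python
# Rough retention estimate for the 128 GB of on-camera storage.
def retention_days(storage_gb, bitrate_mbps, motion_fraction):
    # After the 72-hour safety net, motion-free video is trimmed, so
    # long-run consumption scales with how often the scene has motion.
    days_continuous = storage_gb * 8e9 / (bitrate_mbps * 1e6) / 86_400
    return days_continuous / motion_fraction

# 128 GB, an assumed 1 Mbit/s 720p stream, motion ~60% of the time:
print(f"{retention_days(128, 1.0, 0.6):.0f} days")   # ~20 days
```

This lands in the same range as the "up to 20 days per camera" quoted in the specifications below, though actual figures depend entirely on the chosen encoding settings.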
EASY TO ACCESS, EASY TO CONTROL
There is often a need to allow different users access, but with tailored controls appropriate for their particular roles. For example, a receptionist who needs to see who is at the front door probably does not need full camera configuration privileges. The Meraki dashboard has a set of granular controls for defining what a user can or cannot do: prevent security staff from changing network settings, limit views to only selected cameras, or restrict the export of video. You decide what is possible. With the Meraki cloud authentication architecture, these controls scale for any organization and support Security Assertion Markup Language (SAML) integration.

ISOLATE EVENTS, INTELLIGENTLY
Meraki MV cameras use intelligent motion search to quickly find important segments of video amongst hours of recordings. Optimized to eliminate noise and false positives, this allows users to retrospectively zero in on relevant events with minimal effort. MV's motion indexing offers an intuitive search interface: select the elements of the scene that are of interest, and the dashboard will retrieve all of the activity that occurred in that area. Laptop go missing? Drag the mouse over where it was last seen and quickly find out when it happened and who was responsible.

ANALYTICS, BUILT RIGHT IN
MV's built-in analytics take the average deployment far beyond just security. Make the most of an MV camera by utilizing it as a sensor to optimize business performance, enhance public safety, or streamline operational objectives. Use motion heat maps to analyze customer behavior patterns or identify where students are congregating during class breaks. Hourly or daily levels of granularity allow users to quickly tailor the tool to specific use cases. All of MV's video analytics tools are built right into the dashboard for quick access, and the standard MV license covers all of these tools, with no additional licensing or costs required.

SIMPLY CLOUD-MANAGED
Meraki's innovative GUI-based dashboard management tool has revolutionized networks around the world, and brings the same benefits to networked video surveillance. Zero-touch configuration, remote troubleshooting, and the ability to manage distributed sites through a single pane of glass eliminate many of the headaches security administrators have dealt with for decades. Best of all, dashboard functionality is built into every Meraki product, meaning additional video management software (VMS) is now a thing of the past. Additionally, features like the powerful drag-and-drop video wall help to streamline remote device management and monitoring, whether cameras are deployed at one site or across the globe.
SECURE AND ALWAYS UP-TO-DATE
Centralized cloud management offers one of the most secure platforms available for camera operation. All access to the camera is encrypted with a public key infrastructure (PKI) that includes individual camera certificates. Integrated two-factor authentication provides strong access controls. Local video is also encrypted by default, adding a final layer of security that can't be turned off. All software updates are managed automatically for the delivery of new features and to enable rapid security updates. Scheduled maintenance windows ensure the MV family continues to address users' needs with the delivery of new features as part of the all-inclusive licensed service.

Camera Specifications

Camera
• 1/3.2" 5 MP (2560 x 1920) progressive CMOS image sensor
• 128 GB high-endurance solid-state storage
• Full disk encryption
• 3-10 mm vari-focal lens with variable aperture f/1.3-f/2.5
• Variable field of view: 28°-82° (horizontal), 21°-61° (vertical), 37°-107° (diagonal)
• Automatic iris control with P-iris for optimal image quality
• 1/5 s to 1/32,000 s shutter speed
• Minimum illumination: (color) / (B&W) values not reproduced in this extract
• S/N ratio exceeding 62 dB; dynamic range 69 dB
• Hardware-based light meter for smart scene detection
• Built-in IR illuminators, effective up to 30 meters (98 feet)
• Integrated heating elements for low-temperature outdoor operation (MV71 only)

Video
• 720p HD video recording (1280 x 720) with H.264 encoding
• Cloud-augmented edge storage (video at the edge, metadata in the cloud)
• Up to 20 days of video storage per camera*
• Direct live streaming with no client software (native browser playback)**
• Stream video anywhere with automatic cloud proxy

Networking
• 1 x 10/100 Base-T Ethernet (RJ45)
• Compatible with Meraki wireless mesh backhaul (separate WLAN AP required)
• DSCP traffic marking

Features
• Cloud managed with complete integration into the Meraki dashboard
• Plug-and-play deployment with self-configuration
• Remote adjustment of focus, zoom, and aperture
• Dynamic day-to-night transition with IR illumination
• Noise-optimized motion indexing engine with historical search
• Shared video wall with individual layouts supporting multiple cameras
• Selective export capability with cloud proxy
• Highly granular view, review, and export user permissions with SAML integration
• Motion heat maps for relative hourly or day-by-day motion overview
• Motion alerts

Power
• Power consumption (MV21): 10.94 W maximum via 802.3af PoE
• Power consumption (MV71): 21.95 W maximum via 802.3at PoE+

Environment
• Starting temperature (MV21): -10 °C to 40 °C (14 °F to 104 °F)
• Starting temperature (MV71): -40 °C to 40 °C (-40 °F to 104 °F)
• Working temperature (MV21): -20 °C to 40 °C (-4 °F to 104 °F)
• Working temperature (MV71): -50 °C to 40 °C (-58 °F to 104 °F)

In the box
• Quick start & installation guide
• MV camera hardware
• Wall mounting kit, drop-ceiling T-rail mounting hardware

Physical characteristics
• Dimensions (MV21): 166 mm x 116.5 mm (diameter x height)
• Dimensions (MV71): 173.3 mm x 115 mm (diameter x height)
• Weather-proof IP66-rated housing (MV71 only)
• Vandal-proof IK10-rated housing (MV71 only)
• Lens adjustment range: 65° tilt, 350° rotation, 350° pan
• Weight (MV21): 1.028 kg (including mounting plate)
• Weight (MV71): 1.482 kg (including mounting plate)
• Female RJ45 Ethernet connector; supports Ethernet cable diameters between 5-8 mm
• Status LED
• Reset button
Warranty
• Warranty (MV21): 3-year hardware warranty with advanced replacement
• Warranty (MV71): 3-year hardware warranty with advanced replacement

Ordering Information
MV21-HW: Meraki MV21 Cloud Managed Indoor Camera
MV71-HW: Meraki MV71 Cloud Managed Outdoor Camera
LIC-MV-XYR: Meraki MV Enterprise License (X = 1, 3, 5, 7, 10 years)
MA-INJ-4-XX: Meraki 802.3at Power over Ethernet injector (XX = US, EU, UK, or AU)
Note: Each Meraki camera requires a license to operate.
* Storage duration dependent on encoding settings.
** Browser support for H.264 decoding required.

Mounting Accessories Specifications

Meraki Wall Mount Arm
• Wall mount for attaching the camera perpendicular to the mounting surface; includes pendant cap
• Supported models: MV21, MV71
• Dimensions (wall arm): 140 mm x 244 mm x 225.4 mm
• Dimensions (pendant cap): 179.9 mm x 49.9 mm (diameter x height)
• Combined weight: 1.64 kg

Meraki Pole Mount
• Pole mount for poles with diameter between 40 mm-145 mm (1.57 in-5.71 in)
• Can be combined with MA-MNT-MV-1: Meraki Wall Mount Arm
• Supported models: MV71
• Dimensions: 156.7 mm x 240 mm x 68.9 mm
• Weight: 1.106 kg

Meraki L-Shape Wall Mount Bracket
• Compact wall mount for attaching the camera perpendicular to the mounting surface
• Supported models: MV21, MV71
• Dimensions: 206 mm x 182 mm x 110 mm
• Weight: 0.917 kg

Ordering Information
MA-MNT-MV-1: Meraki Wall Mount Arm for MV21 and MV71
MA-MNT-MV-2: Meraki Pole Mount for MV71
MA-MNT-MV-3: Meraki L-Shape Wall Mount Bracket for MV21 and MV71

LG 50PG20 HD plasma TV datasheet


PLASMA HDTV
50" Class (49.9" diagonal)
• 720p HD resolution
• XD Engine
• 1,000,000:1 dynamic contrast ratio
• Fluid Motion (180 Hz effect)
• 3x HDMI v1.3 with Deep Color
• AV Mode (Cinema, Sports, Game)
• Clear Voice
• LG SimpLink connectivity
• Invisible speaker system
• 100,000 hours to half brightness
• PC input

HD RESOLUTION
High-definition television is the highest-performance segment of the DTV system used in the US. It is a widescreen, high-resolution video image coupled with multi-channel, compact-disc-quality sound.

FLUID MOTION (180 HZ EFFECT)
Enjoy smoother, clearer motion with all types of programming, such as sports and action movies. The moving-picture resolution gives the impression of performance up to 3x the panel's actual refresh rate.

AV MODE
LG HDTVs include 3 AV Modes, preset to optimize picture and sound settings based on "Cinema", "Sports" or "Game" content. AV Modes can be easily set with a convenient button on the remote control.

CLEAR VOICE TECHNOLOGY
Automatically enhances and amplifies the sound of the human voice frequency range to help keep dialogue audible when background noise swells.

INVISIBLE SPEAKER SYSTEM
LG's 2008 line of TVs includes a unique invisible speaker system, tuned by renowned audio expert Mr. Mark Levinson. Speakers are embedded in strategic spots behind the front cabinet and use minute vibrations to turn the entire front bezel into the speaker system. The result is a clean, polished look and enhanced audio with an increased "sweet spot", giving a wider sound field.

(Dimension drawing not reproduced; figures shown: 48.2", 31.1", 33.4", 27.8", 14.3", 15.8", and 400 mm x 400 mm; side view.)

1000 Sylvan Avenue, Englewood Cliffs, NJ 07632
© 2008 LG Electronics U.S.A., Inc., Englewood Cliffs, NJ. All rights reserved. "LG Life's Good" is a registered trademark of LG Corp.

Hikvision 4 MP 42x network IR speed dome datasheet


DS-2DF8442IXS-AELWY(T5) 4 MP 42x Network IR Speed Dome

The Hikvision DS-2DF8442IXS-AELWY(T5) 4 MP 42x network IR speed dome adopts a 1/1.8" progressive scan CMOS chip. With the 42x optical zoom lens, the camera offers more detail over expansive areas. This series of cameras can be widely used where wide-range high-definition coverage is needed, such as rivers, forests, roads, railways, airports, ports, squares, parks, scenic spots, stations and large venues.

⏹ 1/1.8" progressive scan CMOS
⏹ High-quality imaging with 4 MP resolution
⏹ Excellent low-light performance with DarkFighter technology
⏹ 42x optical zoom and 16x digital zoom provide close-up views over expansive areas
⏹ Expansive night view with up to 400 m IR distance
⏹ Water and dust resistant (IP67)
⏹ Supports face capture to detect, capture, grade, and select faces in motion
⏹ Supports road traffic to detect vehicles

Specification

Camera
Image Sensor: 1/1.8" progressive scan CMOS
Min. Illumination: Color: 0.001 lux @ (F1.2, AGC on); B/W: 0.0005 lux @ (F1.2, AGC on); 0 lux with IR
Shutter Speed: 1/1 s to 1/30,000 s
Day & Night: IR cut filter
Zoom: 42x optical, 16x digital
DORI: Detect (25 px/m): 2965.5 m; Observe (63 px/m): 1176.8 m; Recognize (125 px/m): 604.1 m; Identify (250 px/m): 302.1 m
Max. Resolution: 2560 x 1440

Lens
Focus: Auto, semi-auto, manual, rapid focus
Focal Length: 6.0 mm to 252 mm
Zoom Speed: Approx. 4.5 s (optical, wide-tele)
FOV: Horizontal 56.6° to 1.7° (wide-tele); vertical 33.7° to 0.9° (wide-tele); diagonal 63.4° to 1.9° (wide-tele)
Aperture: Max. F1.2

Illuminator
Supplement Light Type: IR
Supplement Light Range: Up to 400 m
Smart Supplement Light: Yes

PTZ
Movement Range (Pan): 360°
Movement Range (Tilt): -20° to 90° (auto flip)
Pan Speed: Configurable from 0.1°/s to 210°/s; preset speed 280°/s
Tilt Speed: Configurable from 0.1°/s to 150°/s; preset speed 250°/s
Proportional Pan: Yes
Presets: 300
Patrol Scan: 8 patrols, up to 32 presets for each patrol
Pattern Scan: 4 pattern scans, record time over 10 minutes for each scan
Power-off Memory: Yes
Park Action: Preset, pattern scan, patrol scan, auto scan, tilt scan, random scan, frame scan, panorama scan
3D Positioning: Yes
PTZ Status Display: Yes
Preset Freezing: Yes
Scheduled Task: Preset, pattern scan, patrol scan, auto scan, tilt scan, random scan, frame scan, panorama scan, dome reboot, dome adjust, aux output

Video
Main Stream: 50 Hz: 25 fps (2560 x 1440, 1920 x 1080, 1280 x 960, 1280 x 720); 60 Hz: 30 fps (2560 x 1440, 1920 x 1080, 1280 x 960, 1280 x 720)
Sub-Stream: 50 Hz: 25 fps (704 x 576, 640 x 480, 352 x 288); 60 Hz: 30 fps (704 x 480, 640 x 480, 352 x 240)
Third Stream: 50 Hz: 25 fps (1920 x 1080, 1280 x 960, 1280 x 720, 704 x 576, 640 x 480, 352 x 288); 60 Hz: 30 fps (1920 x 1080, 1280 x 960, 1280 x 720, 704 x 480, 640 x 480, 352 x 240)
Video Compression: Main stream: H.265+/H.265/H.264+/H.264; sub-stream: H.265/H.264/MJPEG; third stream: H.265/H.264/MJPEG
H.264 Type: Baseline Profile/Main Profile/High Profile
H.265 Type: Main Profile
Scalable Video Coding (SVC): H.264 and H.265 encoding
Region of Interest (ROI): 8 fixed regions for each stream

Audio
Audio Compression: G.711alaw, G.711ulaw, G.722.1, G.726, MP2L2, PCM
Audio Bit Rate: 64 Kbps (G.711)/16 Kbps (G.722.1)/16 Kbps (G.726)/32-192 Kbps (MP2L2)

Network
Protocols: IPv4/IPv6, HTTP, HTTPS, 802.1x, QoS, FTP, SMTP, UPnP, SNMP, DNS, DDNS, NTP, RTSP, RTCP, RTP, TCP/IP, DHCP, PPPoE, UDP, IGMP, ICMP, Bonjour
API: ISUP, ISAPI, Hikvision SDK, Open Network Video Interface (Profile S, Profile G, Profile T)
Simultaneous Live View: Up to 20 channels
channelsUser/Host Up to 32 users. 3 levels: Administrator, Operator and UserSecurity Password protection, complicated password, HTTPS encryption, 802.1X authentication (EAP-TLS, EAP-LEAP, EAP-MD5), watermark, IP address filter, basic and digest authentication for HTTP/HTTPS, RTP/RTSP over HTTPS, control timeout settings, security audit log, TLS 1.3, host authentication (MAC address)Client HikCentral, iVMS-4200, Hik-ConnectWeb Browser IE11, Chrome 57.0+, Firefox 52.0+, Safari 11+ ImageImage Settings Saturation, brightness, contrast, sharpness, gain, and white balance adjustable by client software or web browserDefog Optical defogImage Enhancement BLC, HLC, 3D DNRWide Dynamic Range (WDR) 140 dB WDRImage Stabilization Yes. Built-in gyroscope to improve EIS performance.Regional Exposure YesRegional Focus YesPrivacy Mask Up to 24 masks, polygon region, mosaic mask, mask color configurable InterfaceEthernet Interface 1 RJ45 10M/100M self-adaptive Ethernet port; Hi-PoEOn-board Storage Built-in memory card slot, support MicroSD/MicroSDHC/MicroSDXC, up to 256 GB Alarm 7 inputs, 2 outputsAudio 1 input (line in), max. input amplitude: 2-2.4 vpp, input impedance: 1 KΩ ± 10%; 1 output (line out), line level, output impedance: 600 ΩVideo Output 1.0V[p-p]/75Ω, PAL, NTSC, BNC connector RS-485 HIKVISION, Pelco-P, Pelco-D, self-adaptiveEventBasic Event Motion detection, video tampering alarm, alarm input and output, exceptionSmart Event Line crossing detection, region entrance detection, unattended baggage detection, object removal detection, intrusion detection, region exiting detection, vandal-proof alarm, audio exception detectionSmart Tracking Manual tracking, auto-trackingAlarm Linkage Alarm actions, such as Preset, Patrol Scan, Pattern Scan, Memory Card Video Record, Trigger Recording, Notify Surveillance Center, Upload to FTP/Memory Card/NAS, Send Email, etc.Deep Learning FunctionFace Capture Support detecting up to 30 faces at the same time. Support detecting, tracking, capturing, grading, selecting of face in motion, and output the best face picture of the faceFace Comparison YesPerimeter Protection Line crossing, intrusion, region entrance, region exitingSupport alarm triggering by specified target types (human and vehicle)Road Traffic and Vehicle DetectionRoad Traffic Support vehicle detection (license plate number,vehicle model, and vehicle color recognition)GeneralWiper yesGeneral Function Mirror, password protection, watermark, IP address filterPower 24 VAC (Max. 60 W, including max. 18 W for IR and max. 6 W for heater), Hi-PoE (Max.50 W, including max. 18 W for IR and max. 6 W for heater)Operating Condition Temperature: -40°C to 70°C (-40°F to 158°F), Humidity: ≤ 95% Dimension ⌀ 266.6 mm × 410 mm (⌀ 10.50" × 16.14")Weight Approx. 
8 kg (17.64 lb.)Language 33 languages: English, Russian, Estonian, Bulgarian, Hungarian, Greek, German, Italian, Czech, Slovak, French, Polish, Dutch, Portuguese, Spanish, Romanian, Danish, Swedish, Norwegian, Finnish, Croatian, Slovenian, Serbian, Turkish, Korean, Traditional Chinese, Thai, Vietnamese, Japanese, Latvian, Lithuanian, Portuguese (Brazil), UkrainianApprovalEMC FCC SDoC (47 CFR Part 15, Subpart B);CE-EMC (EN 55032: 2015, EN 61000-3-2: 2019, EN 61000-3-3: 2013, EN 50130-4: 2011 +A1: 2014);RCM (AS/NZS CISPR 32: 2015);IC VoC (ICES-003: Issue 6, 2019);KC (KN 32: 2015, KN 35: 2015)Safety UL (UL 62368-1)CB (IEC 60950-1:2005 + Am 1:2009 + Am 2:2013, IEC 62368-1:2014); CE-LVD (EN 62368-1:2014+A11:2017),BIS (IS 13252(Part 1):2010+A1:2013+A2:2015);LOA (SANS IEC60950-1)Environment CE-RoHS (2011/65/EU); WEEE (2012/19/EU); Reach (Regulation (EC) No 1907/2006)Anti-Corrosion Protection NEMA 4X, WF2ProtectionIP67 Standard, Lightning Protection, Surge Protection and Voltage Transient Protection, ±6kV Line to Gnd, ±3kV Line to Line, IEC61000-4-5⏹Typical ApplicationHikvision products are classified into three levels according to their anti-corrosion performance. Refer to the following description to choose for your using environment.This model has MODERATE PROTECTION.LevelDescriptionTop-level protectionHikvision products at this level are equipped for use in areas where professional anti-corrosion protection is a must. Typical application scenarios include coastlines, docks, chemical plants, and more.Moderate protectionHikvision products at this level are equipped for use in areas with moderate anti-corrosion demands. Typical application scenarios include coastal areas about 2 kilometers (1.24 miles) away from coastlines, as well as areas affected by acid rain.No specific protectionHikvision products at this level are equipped for use in areas where no specific anti-corrosion protection is needed.⏹Dimension⏹Available ModelDS-2DF8442IX-AELWY(T5)⏹Accessory⏹OptionalDS-1604ZJ-BOX-Corner-Y DS-1604ZJ-BOX-Y。
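The DORI figures above follow, at least approximately, from the sensor's horizontal resolution and the telephoto field of view: the pixel density at distance d is width_px / (2 · d · tan(HFOV/2)), and each DORI distance is where that density drops to the criterion. A minimal sketch of the relationship (the computed values differ somewhat from the datasheet's, since the effective FOV the vendor uses is not published):

```python
import math


def dori_distance(width_px, hfov_deg, px_per_m):
    """Distance at which the camera still delivers px_per_m across the scene,
    from pixel density = width_px / (2 * d * tan(hfov / 2))."""
    return width_px / (2 * px_per_m * math.tan(math.radians(hfov_deg) / 2))


for label, density in [("Detect", 25), ("Observe", 63),
                       ("Recognize", 125), ("Identify", 250)]:
    d = dori_distance(2560, 1.7, density)  # 2560-px sensor at the 1.7° tele end
    print(f"{label:9s} ~ {d:6.0f} m")
```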

High Performance Imaging Using Large Camera Arrays

High Performance Imaging Using Large Camera Arrays

Bennett Wilburn¹*, Neel Joshi²†, Vaibhav Vaish², Eino-Ville Talvala¹, Emilio Antunez¹, Adam Barth², Andrew Adams², Mark Horowitz¹, Marc Levoy²
¹Electrical Engineering Department, Stanford University  ²Computer Science Department, Stanford University

[Figure 1: Different configurations of our camera array. (a) Tightly packed cameras with telephoto lenses and splayed fields of view. This arrangement is used for high-resolution imaging (section 4.1). (b) Tightly packed cameras with wide-angle lenses, which are aimed to share the same field of view. We use this arrangement for high-speed video capture (section 4.2) and for hybrid aperture imaging (section 6.2). (c) Cameras in a widely spaced configuration. Also visible are cabinets with processing boards for each camera and the four host PCs needed to run the system.]

Abstract

The advent of inexpensive digital image sensors and the ability to create photographs that combine information from a number of sensed images are changing the way we think about photography. In this paper, we describe a unique array of 100 custom video cameras that we have built, and we summarize our experiences using this array in a range of imaging applications. Our goal was to explore the capabilities of a system that would be inexpensive to produce in the future. With this in mind, we used simple cameras, lenses, and mountings, and we assumed that processing large numbers of images would eventually be easy and cheap. The applications we have explored include approximating a conventional single-center-of-projection video camera with high performance along one or more axes, such as resolution, dynamic range, frame rate, and/or large aperture, and using multiple cameras to approximate a video camera with a large synthetic aperture. This permits us to capture a video light field, to which we can apply spatiotemporal view interpolation algorithms in order to digitally simulate time dilation and camera motion. It also permits us to create video sequences using custom non-uniform synthetic apertures.

*email: wilburn@  †Neel Joshi is now at the University of California, San Diego.

CR Categories: I.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture—imaging geometry, sampling; C.3 [Computer Systems Organization]: Special Purpose and Application-Based Systems—real-time and embedded systems

Keywords: camera arrays, spatiotemporal sampling, synthetic aperture

1 Introduction

One of the economic tenets of the semiconductor industry is that products that sell in large volumes are cheap, while products that sell in lower volumes are more expensive, almost independent of the complexity of the part. For computers, this relationship has changed the way people think about building high-end systems; rather than building a custom high-end processor, it is more cost effective to use a large number of commodity processors.

We are now seeing similar trends in digital imaging. As the popularity of digital cameras grows, the performance of low-end imagers continues to improve, while the cost of the high-end cameras remains relatively constant. In addition, researchers have shown that multiple images of a static scene can be used to expand the performance envelope of these cameras. Examples include creating images with increased resolution [Szeliski 1994] or dynamic range [S. Mann and R. W. Picard 1994; Debevec and Malik 1997]. In other work, Schechner and Nayar used spatially varying filters on a rotating camera to create high-resolution panoramas that also had high dynamic range or high spectral resolution [Schechner and Nayar 2001].
Another use for multiple views is view interpolation to create the illusion of a smoothly moving virtual camera in a static or dynamic scene [Levoy and Hanrahan 1996; Gortler et al. 1996; Rander et al. 1997; Matusik et al. 2000].

Most of these efforts employ a single moving high-quality camera viewing a static scene. To achieve similar results on dynamic scenes, multiple cameras are required. This motivated us in 1999 to think about designing a flexible array containing a large number of inexpensive video imagers. The multiple camera array that resulted consists of 100 video cameras, each connected to its own processing board. The processing boards are capable of local image computation, as well as MPEG2 compression.

In section 2, we review prior work in building multiple video camera systems. While these systems are generally directed at specific applications, they provide valuable insights into the requirements for a flexible capture system. Section 3 gives an overview of our multiple camera array and explains in a little more depth the features we added to make it a general-purpose research tool.

The rest of this paper focuses on our recent results using the camera array in different imaging applications. We start by exploring ways of using multiple cameras to create an aggregate virtual camera whose performance exceeds the capability of an individual camera. Since these applications intend to approximate a camera with a single center of projection, they generally use densely packed cameras. In particular, section 4 explores the creation of a very high-resolution video camera in which the cameras are adjusted to have modestly overlapping fields of view. We then aim the cameras inward until their fields of view overlap completely, and we use our system's fine timing control to provide a virtual video camera with a very high frame rate. In both of these applications, the large number of cameras provides some opportunity that would not be present in a single-camera system. For the virtual high-resolution imager, one can perform exposure metering individually on each camera, which for scenes with spatially varying brightness allows us to form a mosaic with high dynamic range. For the virtual high-speed imager, one can integrate each frame for longer than one over the frame rate, thereby capturing more light per unit time than is possible using a single high-speed camera.

Sections 5 and 6 consider applications in which the cameras are spread out, thereby creating a multi-perspective video camera. One important application for this kind of data is view interpolation, whose goal is to move the virtual observer smoothly among the captured viewpoints. For video light fields, the problem becomes one of spatiotemporal interpolation. Section 5 shows that the optimal sampling pattern to solve this problem uses cameras with staggered, not coincident, trigger times. It also describes a spatiotemporal interpolation method that uses a novel optical flow variant to smoothly interpolate data from the array in both time and virtual camera position.

In section 6 we consider combining the images from multiple viewpoints to create synthetic aperture image sequences. If we align, shift, and average all the camera images, then we approximate a camera with a very large aperture. By changing the amount of the shift, we can focus this synthetic camera at different depths. Using the processing power on each camera board, we can focus the synthetic aperture camera in real time, i.e. during video capture. Alternatively, we can shape the aperture to match particular characteristics of the scene. For example, we freeze a high-speed fan embedded in a natural scene by shaping the aperture in both time and space.
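The shift-and-add focusing just described is simple enough to sketch. The following is a minimal illustration, not the authors' FPGA implementation: it assumes grayscale frames as NumPy arrays and one 3×3 homography per camera (applied here with OpenCV's warpPerspective, though any warp routine would do) that maps each view onto the chosen focal plane; refocusing corresponds to recomputing those homographies for a different plane.

```python
import numpy as np
import cv2


def synthetic_aperture_frame(images, homographies, out_hw):
    """Shift-and-add synthetic aperture: warp every view onto the chosen
    object plane and average. Points on that plane align across views and
    stay sharp; everything off the plane is averaged away, mimicking a
    camera with a very large aperture."""
    acc = np.zeros(out_hw, dtype=np.float64)
    for img, H in zip(images, homographies):
        warped = cv2.warpPerspective(img.astype(np.float32), H,
                                     (out_hw[1], out_hw[0]))
        acc += warped
    return np.clip(acc / len(images), 0, 255).astype(np.uint8)
```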
2 Early Camera Arrays

The earliest systems for capturing scenes from multiple perspectives used a single translating camera [Levoy and Hanrahan 1996] and were limited to static scenes. Dayton Taylor extended this idea to a dynamic scene by using a linear array of still cameras [Taylor 1996]. By triggering the cameras simultaneously and hopping from one camera image to the next, he created the illusion of virtual camera movement through a "frozen" dynamic scene. Manex Entertainment used more widely spaced cameras and added an adjustable trigger delay between cameras to capture images corresponding to a virtual high-speed camera flying around their scenes. Both of these systems used still cameras, so they were limited to capturing one specific virtual camera trajectory through space and time that was fixed by the camera arrangement.

For capturing a more general data set, researchers turned to arrays of video cameras. Like still cameras, video cameras must be synchronized, but they also present a new challenge: enormous data rates. The pioneering multiple video camera array design is the Virtualized Reality™ project [Rander et al. 1997]. Their goal was to capture many views of a scene for video view interpolation. The first version of their system records video using VCRs, giving them practically unlimited recording durations but low quality. Their second version uses 49 video cameras capturing to PC main memory. This system has better quality (VGA resolution at 30 frames per second), but is limited to nine-second capture durations. Every third camera captures color video. To handle the bandwidth of the video cameras, they require one PC for every three cameras.

While the Virtualized Reality™ project uses relatively high quality cameras, two other groups experimented with large arrays of inexpensive cameras. Yang et al.'s Distributed Light Field Camera renders live dynamic light fields from an 8x8 array of commodity webcams [Yang et al. 2002]. Zhang and Chen's Self-Reconfigurable Camera Array uses 48 commodity Ethernet cameras with electronic horizontal translation and pan controls to improve view interpolation results [Zhang and Chen 2004a; Zhang and Chen 2004b]. Although the design of these systems makes them much cheaper than Virtualized Reality™ in terms of per-camera costs, significant compromises were made to use these commodity cameras. First, neither of the arrays could be synchronized, causing artifacts in the view reconstructions. Furthermore, since they were looking at single applications, neither system addressed the bandwidth challenges of building a general purpose large camera array. Yang et al. chose to implement a "finite-view" system, meaning each camera transmits only enough data to reconstruct a small number of light field views per frame time. Zhang and Chen's cameras use JPEG compression, but their choice of Ethernet and a single computer to run the array limits them to a resolution of 320x240 pixels at 15-20 frames per second.

Results from these efforts helped guide our system design. Since our goal was to create a general purpose system, we wanted tight control over both the timing of cameras and their positions. We also needed to be able to record the data from all the cameras, but with far fewer PCs than the Virtualized Reality™ system. The system that we designed to address these goals is described next.
3 The Multiple Camera Array

While we had wanted to use "off-the-shelf" technology to build our camera array, it became clear early on that none of the commercial video cameras would have both the timing and positioning flexibility that our system required. As a result, we decided to build a custom imaging array, but one in which we leveraged existing standards as much as possible to minimize the amount of custom hardware that the system required for operation. A description of a preliminary version of this system was published in Wilburn et al. [2002].

3.1 Hardware Components

Our system consists of three main subsystems: cameras, local processing boards, and host PCs. The cameras are mounted on small printed circuit boards to give us maximum flexibility in their arrangement. Each camera tile is connected to a local processing board through a 2 m long ribbon cable. These processing boards configure each of the cameras and can locally process the image data before sending it out to the host computer in either its raw form or as an MPEG2 video stream. A set of 4 PCs hosts the system, either storing the collected data to disk, or processing it for real time display.

Camera Tiles. One of the most critical decisions for the array was the choice of image sensors and their optical systems. While we thought it was reasonable to assume that computation would continue to get cheaper, we found it more difficult to make that same argument for high-quality lenses. Thus, we chose to use inexpensive lenses and optics as well as inexpensive sensors. In particular, we chose CMOS image sensors with Bayer Mosaic color filter arrays [Bayer 1976]. Although they have more image noise than CCD imagers, CMOS sensors provide a digital interface rather than an analog one, and they offer convenient digital control over gains, offsets, and exposure times. This makes system integration easier.

[Figure 2: Our camera tiles contain an Omnivision 8610 image sensor, passive electronics, and a lens mount. The ribbon cables carry video data, synchronization signals, control signals, and power between the tile and the processing board. To keep costs low, we use fixed-focus, fixed-aperture lenses.]

Figure 2 shows one of our camera tiles. For indoor applications, one typically wants a large working volume and a large depth of field. For these reasons, we use Sunex DSL841B lenses with a 6.1 mm focal length, an F/# of 2.6, and a relatively wide diagonal field of view of 57°. For applications that require a narrow field of view (usually outdoors), we use Marshall Electronics V-4350-2.5 lenses with a 50 mm fixed focal length, an F/# of 2.5, and a diagonal field of view of 6°. Both sets of optics include an IR filter.

The camera tiles measure 30 mm on a side and mount to supports using three spring-loaded screws. These screws not only hold the cameras in place but also let us change their orientations roughly 20° in any direction. For tightly packed camera arrangements, we mount the tiles directly to sheets of acrylic. For more widely spaced arrangements, we have designed plastic adapters that connect the tiles to 80/20 (an industrial framing system) components.

[Figure 3: Camera processing board block diagram. Figure 4: Camera processing board.]

Local Processing Boards. Figure 3 shows a block diagram of a complete camera system, and figure 4 shows the processing board for one camera. The processing board has five major subsystems: a micro-controller and its memory, an MPEG2 compressor, an IEEE1394 interface, a clock interface, and an FPGA which acts as master data router and programmable image computation unit.
By choosing established standards, most of these subsystems could be implemented with existing off-the-shelf chip sets.

We chose the IEEE1394 High Performance Serial Bus [Anderson 1999] (also known as FireWire® and i-Link®) as our interface between the processing boards and the PCs. It guarantees a default bandwidth of 40 MB/s for isochronous transfers, i.e. data that is sent at a constant rate. This is perfect for streaming video, and indeed many digital video cameras connect to PCs via IEEE1394. It is also well suited for a modular, scalable design because it allows up to 63 devices on each bus and supports plug and play. Another benefit of IEEE1394 is that the cables between devices can be up to 4.5 m long, and an entire bus can span over 250 m. Thus, cameras based on such a system could be spaced very widely apart, possibly spanning the side of a building.

Even with this high-speed interface, an array of 100 video cameras (640x480 pixel, 30 fps, one byte per pixel, Bayer Mosaic) would require roughly 25 physical buses to transfer the roughly 1 GB/sec of raw data, and a comparable number of PCs to receive it. Rather than limiting the image size or frame rate, we decided to compress the video using MPEG2 before sending it to the host. The default 4 Mb/s bitstream produced by our SONY encoders translates into a compression ratio of 17.5:1 for 640x480, 30 fps video. To ensure that compression does not introduce artifacts into our applications, we designed the cameras to simultaneously store up to 20 frames of raw video to local memory while streaming compressed video. This lets us compare MPEG2 compressed video with raw video as an offline sanity check.

[Figure 5: Camera array architecture.]

An embedded microprocessor manages the components in the camera and communicates with the host PCs over IEEE1394. The FPGA is used to route the image data to the correct destination, usually either the IEEE1394 chipset or the MPEG2 compression chip. It can also be configured to operate directly on the image data, using its local DRAM for storing temporaries and constants and the SRAM as a frame buffer. Code in a small boot ROM configures the IEEE1394 interface so that host PCs can download a more sophisticated executable and configuration code to the board.
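The bandwidth arithmetic above is easy to reproduce. A back-of-the-envelope sketch using the figures quoted in the text (the small differences from the quoted "roughly 25" buses and 17.5:1 ratio come down to rounding and how bus overheads are counted):

```python
cameras = 100
width, height, fps, bytes_per_px = 640, 480, 30, 1  # Bayer mosaic: 1 byte/pixel

raw_per_camera = width * height * fps * bytes_per_px  # ~9.2 MB/s per camera
raw_total = cameras * raw_per_camera                  # ~0.92 GB/s for the array
buses_needed = raw_total / 40e6                       # 40 MB/s per IEEE1394 bus

compression = raw_per_camera * 8 / 4e6                # raw bits vs 4 Mb/s MPEG2

print(f"raw array bandwidth: {raw_total / 1e9:.2f} GB/s")
print(f"IEEE1394 buses without compression: {buses_needed:.0f}")
print(f"MPEG2 compression ratio: {compression:.1f}:1")
```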
3.2 System Architecture

Figure 5 shows the high-level architecture of our system. Each of our cameras is a separate IEEE1394 device with three ports. The cameras are connected in a tree, with one port connecting to a parent and one or two ports leading to child nodes. The parent port of the root node is connected to the host computer, which has two striped IDE hard drives to capture the image data. For large arrays, we must use multiple PCs and IEEE1394 buses. Theoretically, the 40 MB/s streaming bandwidth of IEEE1394 should accommodate 62 compressed video streams, but implementation details (bus arbitration and our inability to get cycle-accurate control over the bus) limit us to 30 cameras per bus. We run a networked camera control application that lets us drive the operation of the entire array from one PC.

The timing requirements for the array were stricter than could be achieved using IEEE1394 communication, especially with multiple PCs. To achieve the desired timing tolerance, we route a common clock and trigger signals to the entire array using an extra set of CAT5 cables. These cables roughly match the IEEE1394 topology, except they form a single tree even if multiple IEEE1394 buses are used. A single "master" root board in the array generates its own 27 MHz clock and sends it to two children via CAT5 cables, which then buffer the clock and send it to two more children, and so on. The master also generates a trigger which is buffered and repeated to all other boards. This trigger is used to synchronize the cameras and provides a timing signal with no more than 200 ns of skew between any two processing boards. To put this in perspective, 200 ns is one thousandth of our minimum integration time of 205 µs.

Most systems would use the trigger to synchronize all of the cameras. In fact the early prototype of our system [Wilburn et al. 2002] used it for this purpose as well. The final system provides an arbitrary, constant temporal phase shift for each camera. Because the timing signals for the image sensors are generated by the FPGAs, this was done by adding programmable timer reset values to the FPGA code. Thus, using just one trigger signal, we can reset all of the cameras to arbitrary phase offsets.

3.3 Results

Our multiple camera array captures VGA video at 30 frames per second (fps) from 100 cameras to four PCs. The default MPEG bit rate is 4 Mb/s, but we are free to alter the bit rate or even stream I-frame only video. At 4 Mb/s, we can capture sequences up to two and a half minutes long before we reach the 2 GB file size limit of our operating system. We have not yet needed to extend this limit.

4 Improved Imaging Performance

By combining data from an array of cameras, we can create an aggregate virtual camera with greatly improved performance. Although one could design optical systems that ensure a common center of projection for all of the cameras, these systems become costly and complex as the number of cameras grows. Instead, we pack the cameras as closely as possible to approximate a single center of projection and compensate for parallax in software. Here, we discuss two high-performance applications: high-resolution, high-dynamic range video capture; and high-speed video capture.

4.1 High-Dynamic Range and High-Resolution Video

If we tightly pack our cameras and aim them with abutting or partially overlapping fields of view, we create a high-resolution video camera. Using this configuration and existing techniques from the image mosaicing literature, we can register and blend the images to create a single image of high resolution. One advantage of using many cameras for this task is that we can meter them individually. This allows us to capture scenes with a greater dynamic range than our cameras can record individually, provided that the dynamic range in each camera's narrow field of view is small enough. For scenes in which even the local dynamic range exceeds our sensors' capabilities, we can trade resolution for dynamic range by increasing the overlap of the cameras' fields of view, so that each viewing ray is observed by multiple cameras with different exposure settings.

To demonstrate this idea, we arranged our cameras in a dense 12x8 array with approximately 50% overlapping fields of view, shown in figure 1(a). Each camera has a telephoto lens with a roughly six degree diagonal field of view. With 50% overlap between adjacent cameras, most points in the scene are observed by four cameras, and the entire array has a total field of view of 30 degrees horizontally and 15 degrees vertically.
Color Calibration. Because the inexpensive sensors in our array have varying color responses, we must color match them to prevent artifacts in the image mosaic. Color calibration is important in any application involving multiple cameras, but it is critical in this application, since different parts of the image are recorded by different cameras. We must also determine the response curves of our cameras if we wish to create high dynamic range images.

With gamma correction turned off in the cameras, the response curves of our sensors are reasonably linear except at the low and high ends of their output range. We have devised an automatic color matching routine that forces this linear response to be identical for all of the cameras and color channels by iteratively adjusting the offsets and gains for each color channel in every camera. Our goal is to ensure uniformity, not absolute accuracy; our final mosaics can be converted to another color space with one last transformation.

Each iteration of our calibration routine takes images of a white target under several different exposure levels. The target is placed close enough to the array to fill the field of view of all cameras. The exposure setting is the actual duration for which the sensor integrates light and is very accurate. The routine calculates the slopes and offsets of the sensor responses, then computes new settings to match a target response. We choose a line mapping the minimum response to 20 and the maximum to 220, safely inside the linear range of our sensors. Doing this for each channel using images of a white target also white balances our sensors. The entire process takes less than one minute.
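One iteration of such a matching step can be sketched as follows, under the stated linear-response assumption. The function below is illustrative, not the authors' firmware: `measured` holds a channel's mean output for the white target at each exposure, and the returned corrections would be folded into that channel's current gain and offset settings.

```python
import numpy as np


def match_channel(exposures, measured, lo=20.0, hi=220.0):
    """Fit the channel's (assumed linear) response and return the gain and
    offset corrections that map its extremes onto the target line, which
    sends the shortest exposure to `lo` and the longest to `hi`."""
    slope, intercept = np.polyfit(exposures, measured, 1)
    cur_lo = slope * min(exposures) + intercept
    cur_hi = slope * max(exposures) + intercept
    gain = (hi - lo) / (cur_hi - cur_lo)  # multiplicative correction
    offset = lo - gain * cur_lo           # additive correction
    return gain, offset
```

Running this per channel and per camera, and iterating until the fitted lines coincide, also white-balances the sensors, since the calibration target is white.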
Assembling HDR Image Mosaics. We use Autostitch [Brown and Lowe 2003] to create our image mosaics. Autostitch uses a scale-invariant feature detector to detect corresponding features in overlapping images, bundle adjustment to estimate globally optimal homographies to align all of the images, and a multi-band blending algorithm to combine the registered images into a single mosaic. The cameras need not be precisely aimed, because Autostitch finds appropriate homographies to perform seamless image stitching. Given the 34 mm separation of our cameras and our scene, roughly 120 m away, we can tolerate +/-20 m of depth variation with less than 0.5 pixels of disparity in the mosaiced image.

For our application, we have modified Autostitch in two ways. First, we use our response curves and the cameras' exposure durations to transform pixel values from the cameras into a floating point, relative irradiance value before blending. Thus, the output of the blending is a floating point image. Our second modification is replacing the weights for the multi-band blend with a confidence measure that is high for pixel values in the middle of the sensor response and low for saturated or underexposed pixels, as well as being low for pixels at the edges of each camera.
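A minimal sketch of those two modifications, assuming already-linearized 8-bit sensors (for a linear response the inverse response curve reduces to a scale, so relative irradiance is just pixel value over exposure time; the hat-shaped weight below is one plausible confidence measure of the kind described, not the paper's exact one, and it omits the per-camera edge falloff):

```python
import numpy as np


def relative_irradiance(pixels, exposure_s):
    """Linearized pixels -> floating-point relative irradiance."""
    return pixels.astype(np.float64) / exposure_s


def confidence(pixels, lo=20.0, hi=220.0):
    """1.0 mid-range, falling to 0.0 at saturated/underexposed extremes."""
    mid = 0.5 * (lo + hi)
    return np.clip(1.0 - np.abs(pixels - mid) / (mid - lo), 0.0, 1.0)


def blend(exposures, images):
    """Confidence-weighted average of registered images in irradiance space."""
    num = sum(confidence(im) * relative_irradiance(im, t)
              for t, im in zip(exposures, images))
    den = sum(confidence(im) for im in images) + 1e-9
    return num / den
```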
Results. Figure 6 shows a comparison of 3800x2000 pixel mosaics captured with uniform and individually selected camera exposure times. The uniform exposure loses details in the brightly lit hills and dark foreground trees. The individually metered cameras capture a wider range of intensities, but they still have saturated and under-exposed pixels where their dynamic range is exceeded. An even better picture can be acquired by taking advantage of the cameras' overlapping fields of view to image each point with different exposure durations. Figure 7(a) shows a mosaic captured using cameras with one of four exposure times (0.20 ms, 0.62 ms, 1.4 ms, and 3.07 ms). The increased local dynamic range can be seen in the covered walkway in the inset (c).

To evaluate the overall image quality, we took a picture using a 3504x2336 pixel Canon 20D configured with nearly the same field of view and compared it to one frame of our high-resolution video (figure 7(b)). The results are encouraging. While the insets show that the Canon image is superior, the effective resolution difference is modest. Plotting pixel intensities across edges in the two images showed that the Canon's resolution is roughly 1.5 times better. Since we could easily add cameras, or reduce overlap to increase resolution, this degraded resolution is not a serious limitation. In fact, resolution chart measurements with our cameras indicate that their effective resolution is about 400 pixels horizontally, not 640, so the resolution of the mosaic is not much worse than what we see from a single camera. What is more surprising is that the contrast of our image mosaic is noticeably worse than the 20D's. This is due to light leakage and aberrations in the lenses. Overall, these results show that it is possible to use large numbers of inexpensive cameras to build a virtual camera of both high dynamic range and high resolution.

In this example we use large overlaps so four cameras view each pixel. Our array can easily be configured to reduce the overlap and create larger mosaics. For example, reducing the camera overlap to 10% would yield very large mosaics (roughly 6900x3500 pixels) using the same number of cameras. (Remember that these are video cameras; we know of no non-classified video camera of comparable resolution.) This flexibility raises the question of how to optimally allocate camera views for imaging. This answer in turn depends on the dynamic range of the scene and the algorithm used for adaptively setting the exposure times. We are starting to look at adaptive metering algorithms for camera arrays to address this issue.

4.2 High-Speed Video

The previous application takes advantage of our flexible mounting system and exposure control to increase the resolution and dynamic range of video capture. The timing precision of our array offers another opportunity for creating a high-performance aggregate camera: high-speed video capture. We have previously described a method for configuring the array as a single, virtual, high-speed video camera by evenly staggering the camera trigger times across the 30 Hz frame time [Wilburn et al. 2004]. Using 52 tightly packed cameras oriented with wholly overlapping fields of view, we simulated a 1560 frame per second (fps) video camera.

One benefit of using a camera array for this application is that frame rate scales linearly with the number of cameras. Also, compressing the video in parallel at each camera reduces the instantaneous data rate and permits us to stream continuously to disk for several minutes. By contrast, typical commercial high-speed cameras are limited to capture durations that fit in local memory, often as low as a few seconds, and require some means to synchronize the capture with the high-speed event. Finally, unlike a single camera, the exposure time for each frame can be greater than the inverse of the high-speed frame rate. In other words, we can overlap frame times among the cameras. This allows us to collect more light and reduce noise in our images at the cost of increased motion blur. By temporally deconvolving the captured video, we can recover some of the lost temporal resolution [Wilburn et al. 2004; Shechtman et al. 2002].
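The trigger staggering behind these numbers is straightforward to compute; a short sketch (the 52-camera, 30 Hz figures are from the text; the offsets are the values the programmable FPGA timer resets would be set to):

```python
n_cameras, base_fps = 52, 30.0
frame_time = 1.0 / base_fps  # ~33.3 ms at 30 Hz

# One evenly staggered trigger offset per camera across the frame time.
offsets_s = [i * frame_time / n_cameras for i in range(n_cameras)]

print(f"virtual rate: {n_cameras * base_fps:.0f} fps")           # 1560 fps
print(f"inter-frame spacing: {1e6 * offsets_s[1]:.0f} us")       # ~641 us
```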
As with the image mosaics before, we must account for the slight parallax between views from different cameras. We assume a relatively shallow or distant scene and use planar homographies to align the images from all cameras to the desired object plane. This leads to artifacts for objects not at the assumed scene depth. In the next section, we extend this high-speed method to the case of more widely spaced cameras, and in section 5.2 we describe a technique for interpolating between the views produced by the cameras. As we will see, this technique can also be used to correct the misalignments in our high-speed video.

5 Spatiotemporal Sampling

We now turn to a different regime for the array: cameras spaced to sample a very wide spatial aperture. Data captured from such arrangements can be used for synthetic aperture photography, view interpolation, and analysis of scene structure and motion. We treat synthetic aperture photography in section 6. For the other two applications, a major challenge is establishing correspondences between points in different views. Generally speaking, algorithms for computing correspondences perform better when the motion between views is minimized. In this section, we show how to reduce image motion between views of dynamic scenes by staggering camera trigger times. Section 5.2 describes a new view interpolation algorithm based on optical flow.

SYSTEMS AND METHODS FOR PERFORMING HIGH SPEED VIDEO CAPTURE AND DEPTH ESTIMATION USING ARRAY CAMERAS

Application No.: EP15889406.3. Filing date: 2015-04-17. Publication No.: EP3284061B1. Publication date: 2021-11-10.
Abstract: High speed video capture and depth estimation using array cameras is disclosed. Real world scenes typically include objects located at different distances from a camera. Therefore, estimating depth during video capture by an array camera can result in smoother rendering of video from image data captured of real world scenes. One embodiment of the invention includes cameras that capture images from different viewpoints, and an image processing pipeline application that obtains images from groups of cameras, where each group of cameras starts capturing image data at a staggered start time relative to the other groups of cameras. The application then selects a reference viewpoint and determines scene-dependent geometric corrections that shift pixels captured from an alternate viewpoint to the reference viewpoint by performing disparity searches to identify the disparity at which pixels from the different viewpoints are most similar. The corrections can then be used to render frames of video.
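As a rough illustration of the disparity search the abstract describes, the following brute-force SAD scan slides a block along a rectified scanline and returns the shift at which the two viewpoints are most similar. The block size, search range, and SAD cost are illustrative assumptions, not details taken from the patent:

```python
import numpy as np


def best_disparity(ref, alt, y, x, block=8, max_disp=64):
    """Return the shift d minimizing the sum of absolute differences between
    a block in the reference view and the shifted block in the alternate
    view -- i.e. the disparity at which the pixels are most similar."""
    patch = ref[y:y + block, x:x + block].astype(np.int32)
    best_d, best_cost = 0, np.inf
    for d in range(min(max_disp, alt.shape[1] - x - block) + 1):
        cand = alt[y:y + block, x + d:x + d + block].astype(np.int32)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```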

Device and Method for High-Speed, Robust Estimation of Picture Noise in a Video Processing System

Patent title: Device and method for high-speed, robust estimation of picture noise in a video processing system
Inventor: Karl Wittig (ウィッティグ カール)
Application No.: JP2004541067. Filing date: 2003-09-24. Publication No.: JP2006502616. Publication date: 2006-01-19.
Abstract: A method and device for determining a picture-noise estimate in a digital video processing system using only a single video picture. A plurality of SAD (sum of absolute differences) values are computed over the video picture. For each SAD value, a decision is made as to whether it falls within one or more of a plurality of specified SAD value ranges, each range corresponding to a candidate noise estimate; this decision can be made against every specified range. For each range, the number of SAD values falling inside it is counted. The candidate estimate whose count matches a specified criterion is then selected as the picture-noise estimate for the digital video processing system.
Applicant: Koninklijke Philips Electronics N.V.
Agents: 津軽 進, 宮崎 昭彦, 笛田 秀仙
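One plausible reading of the method, as a hedged sketch: the block size, the neighbour pairing used to generate SAD values, and the bin edges below are all illustrative assumptions; the abstract only specifies counting SAD values per candidate range and selecting by a count criterion.

```python
import numpy as np


def estimate_noise(frame, block=8, bins=(0, 2, 4, 8, 16, 32), min_count=100):
    """Single-picture noise estimate: SAD between horizontally neighbouring
    blocks, binned into candidate ranges; the lowest range whose population
    meets the criterion is returned, on the idea that nearly identical
    neighbouring blocks differ mostly by noise."""
    h, w = frame.shape
    sads = [np.abs(frame[y:y + block, x:x + block].astype(np.int32) -
                   frame[y:y + block, x + block:x + 2 * block].astype(np.int32)).mean()
            for y in range(0, h - block + 1, block)
            for x in range(0, w - 2 * block + 1, block)]
    sads = np.asarray(sads)
    for lo, hi in zip(bins[:-1], bins[1:]):
        if np.count_nonzero((sads >= lo) & (sads < hi)) >= min_count:
            return 0.5 * (lo + hi)  # representative value for the range
    return None
```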

HIGH-SPEED DIGITAL VIDEO CAMERA SYSTEM AND CONTROL

Applicant: KEYMED (MEDICAL & INDUSTRIAL EQUIPMENT) LIMITED. Address: KeyMed House, Stock Road, Southend-on-Sea, Essex SS2 5QH, GB. Nationality: GB. Agent: Frost, Alex John.
Application No.: EP04715363.0. Filing date: 2004-02-27. Publication No.: EP1642452A1. Publication date: 2006-04-05.
摘要:A high-speed digital video camera (3) and controller display unit (CDU) (5) therefor are disclosed. The CDU (5) comprises a display screen (50) and a control console having one or more control elements (54, 56). The CDU (5) is adapted to be connected to the camera (3) by means of a controller display interface (32) of the camera (3), and to display video images received from the camera without storing video image data locally to the CDU. The images are preferably displayed substantially immediately by the CDU (5). The camera (3) contains the main image processing hardware, so that the CDU (5) may be highly portable and easily connected to the camera for immediate use. A high-speed digital video camera system (1) and a method of controlling the same are also disclosed.

The Role of Technology in College Sports Training

Technology has revolutionized the way college athletes train and compete in the world of sports. From advanced equipment to cutting-edge data analytics, technology has significantly enhanced the performance, safety, and overall training experience for college athletes. Here are some of the key roles that technology plays in college sports training:

1. Performance Tracking: Advanced wearable devices and sensors allow coaches and trainers to monitor athletes' performance in real time. These devices can track metrics such as heart rate, speed, acceleration, and even fatigue levels, providing valuable insights into an athlete's physical condition and performance during training and competition.

2. Video Analysis: High-speed cameras and video analysis software have become essential tools for analyzing and improving athletes' technique and performance. Coaches can use video footage to identify areas for improvement, correct form and mechanics, and develop personalized training plans for each athlete.

3. Virtual Reality and Simulation: Virtual reality (VR) technology is increasingly being used to simulate game scenarios and provide athletes with immersive training experiences. This allows athletes to practice decision-making, reaction time, and situational awareness in a controlled and realistic environment.

4. Injury Prevention and Rehabilitation: Technology plays a crucial role in preventing injuries and expediting the rehabilitation process for injured athletes. Tools such as 3D motion analysis systems, force plates, and biometric sensors help identify potential injury risks and track an athlete's progress during recovery.

5. Data Analytics: The use of data analytics and machine learning algorithms has revolutionized the way coaches and trainers analyze and interpret performance data. By processing large volumes of data, coaches can gain insights into trends, patterns, and correlations that can be used to optimize training programs and improve overall performance.

6. Equipment and Gear Innovation: Advancements in materials science and engineering have led to the development of high-performance sports equipment and gear. From lightweight and durable footwear to advanced sports apparel, technology has improved athletes' comfort, safety, and performance on the field.

7. Communication and Collaboration: Technology facilitates seamless communication and collaboration between coaches, athletes, and support staff. Whether through mobile apps, online platforms, or video conferencing tools, technology enables real-time feedback, remote coaching, and the exchange of training plans and performance data.

In conclusion, technology has become an integral part of college sports training, enabling athletes and coaches to push the boundaries of performance and safety. As technology continues to evolve, its role in college sports training is only expected to expand, further enhancing the development and success of student-athletes across various sports disciplines.

High-Speed Camera Product Introduction (English Essay)

Hey there! Today, I'd like to introduce you to a cutting-edge technology that's been changing the game in all sorts of industries – the high-speed camera. You might be thinking, "What's so special about a high-speed camera?" Well, trust me, these little devices pack a punch and can open up a world of possibilities. So, buckle up and let's dive into what makes them so unique!

First and foremost, what really stands out about high-speed cameras is their ability to capture images at an insanely fast frame rate. We're talking thousands or even millions of frames per second, compared to a typical smartphone camera that maxes out at around 60 fps. This super-fast capture rate lets us see things in slow motion that we'd never be able to catch with the naked eye.

For instance, imagine you're watching a balloon pop. With a regular camera, all you'd see is a quick burst and the balloon gone in a split second. But with a high-speed camera, you can slow things down to see the balloon stretching and the explosion rippling through the latex before it finally bursts apart. It's mesmerizing and incredibly insightful.

Now, you might be wondering, where would I actually use a high-speed camera? These cameras find their place in various fields. In sports, coaches and athletes can use high-speed footage to analyze and improve performance down to the smallest detail. In manufacturing, engineers can catch production line hiccups that could lead to costly errors. And in scientific research, these cameras help us understand complex phenomena like fluid dynamics and chemical reactions.

But the applications don't stop there. High-speed cameras are also a staple in the film industry, providing filmmakers with breathtaking slow-motion shots that add drama and intensity to their work. You've probably seen epic action sequences with slow-motion scenes – that's thanks to high-speed cameras!

Now, let's talk about some features that make high-speed cameras a breeze to use. Many models offer adjustable frame rates, so you can slow down or speed up the footage depending on what you need. There's also the option to control resolution, so you can capture ultra-high-definition images or adjust settings for faster recording.

One thing to keep in mind, though, is that these cameras can produce massive amounts of data due to the high frame rates and resolutions. So, you'll need to have a storage plan in place, whether that's ample on-board memory or an external device to store all that footage.

Another cool aspect of high-speed cameras is their versatility. From pocket-sized models to larger, professional-grade setups, there's something for everyone. Whether you're an amateur filmmaker or a seasoned scientist, you can find a high-speed camera that suits your needs and budget.

In conclusion, high-speed cameras are truly a game-changer in various industries. They're the secret weapon behind some of the most captivating footage and groundbreaking discoveries. So, whether you're looking to capture the perfect sports moment, dive deep into scientific research, or simply play around with cool slow-motion shots, a high-speed camera is the way to go.

Essay Material on High-Speed (Highway) Monitoring

High-speed monitoring is a crucial aspect of ensuring road safety and traffic efficiency. It involves utilizing advanced technologies to monitor vehicle speeds and enforce traffic regulations on high-speed roads. By implementing high-speed monitoring systems, law enforcement agencies can effectively deter speeding, reduce accidents, and improve overall road conditions.

One of the most common methods of high-speed monitoring is the use of speed cameras. These cameras are strategically placed along highways and expressways to capture images of vehicles exceeding the posted speed limit. The images captured by these cameras provide clear evidence of speeding violations, allowing authorities to issue citations and impose appropriate penalties.

In addition to speed cameras, other technologies are also used for high-speed monitoring, including radar guns, traffic sensors, and aerial surveillance. Radar guns are handheld devices that emit radio waves to measure the speed of approaching vehicles. Traffic sensors can be embedded in the road surface to detect the presence and speed of vehicles. Aerial surveillance, using helicopters or drones, provides a comprehensive view of traffic conditions and enables law enforcement to identify speeding vehicles from a distance.

The implementation of high-speed monitoring systems has resulted in significant improvements in road safety. Studies have shown that the presence of speed cameras and other monitoring technologies leads to a reduction in speeding violations, which in turn reduces the number of accidents and fatalities. Furthermore, high-speed monitoring systems can also help to improve traffic flow by preventing congestion and ensuring a smoother flow of vehicles.

High-speed monitoring is an essential tool for ensuring the safety and efficiency of high-speed roads. By deterring speeding and enforcing traffic regulations, it helps to create a safer driving environment for all road users.

Record Player Speed (English Essay)

With the rapid development of technology, the invention of the phonograph greatly changed the way we listen to music. The phonograph, also known as a record player, is a device that plays sound recordings by spinning a disc with grooves that represent the sound waves. It was invented by Thomas Edison in 1877 and has since become a popular way to enjoy music.

One of the main advantages of a phonograph is its ability to play music at different speeds. This feature allows listeners to control the tempo of the music, making it possible to enjoy a wide range of genres and styles. For example, classical music is often played at a slower speed to enhance its beauty and elegance, while rock and roll music is typically played at a faster speed to create a more energetic and dynamic sound.

In addition to speed control, phonographs also offer high-quality sound reproduction. The grooves on the record disc contain the audio information, which is read by a stylus as it moves along the grooves. This process results in a clear and crisp sound that is faithful to the original recording. Unlike digital music, which can sometimes lose quality due to compression, phonographs provide a rich and immersive listening experience.

Furthermore, phonographs have a timeless appeal that has captivated music lovers for generations. The physical act of placing a record on the turntable, carefully lowering the stylus, and watching it spin creates a sense of anticipation and excitement that cannot be replicated by digital music players. The tactile experience of handling a record and feeling the grooves adds a personal touch to the listening experience, making it more intimate and engaging.

Despite the popularity of digital music streaming services, phonographs continue to hold a special place in the hearts of music enthusiasts. Their unique combination of speed control, high-quality sound reproduction, and timeless appeal makes them a cherished way to enjoy music. As technology continues to advance, the phonograph remains a classic and beloved device that will always have a place in the world of music.

RBRE960 Powerful Quad-Band WiFi Router

Powerful Quad-Band WiFi Router

The Orbi 960 Quad-Band WiFi 6E Mesh-Ready Router, with the WiFi 6E "express lane" and a 10-Gig wired Internet port, delivers ultra-fast speeds and massive device capacity. With combined lightning-fast WiFi speeds of up to 10.8Gbps across up to 3,000 square feet, you'll enjoy the most amazing WiFi experience ever. Protect data, sensitive information, and devices with award-winning NETGEAR Armor™ Internet security.

Key Features
• Fastest Possible Speeds — RBRE960 delivers the fastest WiFi possible at combined speeds up to 10.8Gbps† with coverage up to 3,000 square feet to help everyone in your home accomplish more in a snap.
• Access All-new WiFi 6E — Experience lightning-fast WiFi with the 6GHz "express lane"³ dedicated to WiFi 6E devices, including the latest ultra laptops & Samsung Galaxy phones & tablets.
• Use Multiple Devices Simultaneously — Enjoy 4K/8K video streaming, Zoom videoconferencing, and WiFi calling for up to 200 devices without dropped connections.
• Faster Speeds from Your ISP — 10-Gig Internet port unleashes Internet speeds of today and tomorrow – 1.4Gbps, 2Gbps, 5Gbps, or 10Gbps Internet.††
• High-Speed Wired Connections — Create a super-fast wired connection for your most demanding tech with a 2.5-Gigabit LAN port.
• Expandable High-performance Mesh — Easily expand WiFi coverage by adding an Orbi 960 satellite (sold separately) to future-proof your router investment.
• Voice Command — Control your WiFi using voice commands when you have Amazon Alexa™ or the Google® Assistant.

Technical Specifications
• Orbi AXE11000 Router (1200 + 2400 + 2400 + 4800Mbps)†
• Simultaneous quad-band WiFi
  - Radio 1: IEEE® 802.11b/g/n/ax 2.4GHz — 1024-QAM support
  - Radio 2: IEEE® 802.11a/n/ac/ax 5GHz — 1024-QAM support
  - Radio 3: IEEE® 802.11a/n/ac/ax 5GHz — 1024-QAM support
  - Radio 4: IEEE® 802.11a/n/ac/ax 6GHz — 1024-QAM support³
• MU-MIMO∞ capable for simultaneous data streaming
• Implicit & Explicit Beamforming for 2.4GHz, 5GHz & 6GHz bands
• Powerful quad-core 2.2GHz processor
• Memory — Router: 512MB NAND flash and 1GB RAM
• Twelve (12) high-performance internal antennas with high-power amplifiers
• Ports:
  - One (1) 10Gbps Multi-Gigabit Ethernet WAN port
  - One (1) 2.5Gbps Multi-Gigabit Ethernet LAN port
  - Three (3) 10/100/1000Mbps Gigabit Ethernet LAN ports

Services & Security
• NETGEAR Armor™ Powered by Bitdefender® — Protect your WiFi with a shield of security across your PC, phone, camera, TV, Echo, etc.¹
• NETGEAR Smart Parental Controls™ — Promote healthy Internet habits, foster responsibility, and build trust with your kids²
• Orbi App — Easily set up your WiFi system, manage your network remotely, pause the Internet on any device, track your Internet data usage, and more
• Standards-based WiFi security (802.11i, 128-bit AES encryption with PSK)
• Guest WiFi network makes it easy to set up separate & secure Internet access for guests

Package Contents (960 Series)
• One (1) Orbi Router (RBRE960)
• One (1) 6.6 ft (2m) Ethernet cable
• One (1) power adapter
• Quick start guide

System Requirements
• High-speed Internet connection to existing modem or gateway

Warranty & Support
• This product comes with a limited warranty that is valid only if purchased from a NETGEAR authorized reseller (/warranty).
• 90-day complimentary technical support following purchase from a NETGEAR authorized reseller.

Physical Specifications
• Dimensions: 11 x 7.5 x 3.3 in (279 mm x 191 mm x 84 mm)
• Weight: 3.0 lb (1.36 kg)

† Maximum wireless signal rate derived from IEEE®
802.11 specifications. Actual data throughput and wireless coverage will vary and be lowered by network and environmental conditions, including network traffic volume, device limitations, and building construction. NETGEAR makes no representations or warranties about this product's compatibility with future standards. Up to 11,000Mbps wireless speeds achieved when connecting to other 802.11ax 11,000Mbps devices.
†† Gigabit service plans & compatible cable modem required for Gigabit or Multi-Gig Internet speeds. Wireless speeds are country-specific, as certain WiFi band channels may not be available in some countries.
¹ NETGEAR Armor™ is free during the trial period. A yearly subscription, after the trial period, protects all of your connected devices. Visit /armor
² NETGEAR Smart Parental Controls™ fees apply for a Premium Plan. Visit /spc for more information.
³ 6GHz band is limited to indoor usage. Clients must support the 6GHz band (WiFi 6E).
∞ MU-MIMO capability requires both router and client device to support MU-MIMO.
For indoor use only. The country settings must be set to the country where the device is operating. For regulatory compliance information, visit /about/regulatory.
NETGEAR, the NETGEAR Logo, Orbi, NETGEAR Armor, and NETGEAR Parental Controls are trademarks of NETGEAR, Inc. Any other trademarks mentioned herein are for reference purposes only. ©2022 NETGEAR, Inc.
[Connection diagram not reproduced in this extract.]

Skyworks 6G-SDI High-Definition Video Transmission Solutions Guide

Introduction

Digital video transmission rates have steadily increased since the introduction of high-definition video. The latest trend in the industry for high-resolution video is the market adoption of 6G-SDI to support 4K digital cinema and ultra-high definition (UHD) television.

Digital video data delivery at the higher speeds required by 6G-SDI poses new challenges in designing broadcast video production and transmission equipment. In particular, high-frequency, low-jitter clocking solutions are a critical element in maintaining proper signal integrity through the various components and interconnecting cables that constitute the high-definition video network. In addition, these timing solutions must be flexible enough to accommodate the multiple frequencies required by legacy video standards.

Higher Speed Video Standards on the Horizon

The Society of Motion Picture and Television Engineers (SMPTE) was founded in 1916 to standardize video content distribution. Video equipment manufacturers have since adhered to these standards. In 1997, the SMPTE established the SD-SDI 259M standard, which was the first ratified definition of a serial digital interface (SDI) to send and receive uncompressed digital video over 75-ohm coaxial cable within a studio environment. SDI supports transmission rates ranging from 270 Mbps to 360 Mbps.

Because digitized video signals accumulate jitter across video components and interconnecting cables, SMPTE established limits on the allowable jitter content of SDI signals. As high-definition digital video advanced to 720p and 1080i, SMPTE defined the HD-SDI 292M standard to support higher bandwidth video transmission at 1.485 Gb/s.

In recent years, continued technological innovation in digital video has pushed the boundaries of video resolution from 1080p (2K resolution) to 4K. Transmitting a larger number of pixels on the same infrastructure implies having to deliver the video payload at 5.94 Gb/s. The goal of the standards published by the SMPTE has been to guide the increasing data transmission rates to ensure that existing video production facility infrastructure can support and broadcast higher resolution video. Although the SMPTE body has yet to ratify a standard for 6G-SDI, video equipment suppliers are already meeting the demand for 4K by introducing solutions to support the faster data rates.

Early 6G/12G-SDI Market Adoption

In response to surging interest and demand for 4K video components, broadcast video equipment suppliers are starting to release 6G-SDI compatible products such as 4K production switchers, video routers, encoders/decoders, video monitors, video servers and video converters. In addition to enabling transmission of UHD video over standard BNC cable, this equipment supports simplified switching between UHD, HD and SD formats. This enables broadcast video engineers to easily swap between different formats depending on their content production requirements. This flexibility eases the migration to UHD by enabling studios to leverage their existing investment in HD and SD equipment and continue to produce content in a variety of formats.

In 2005, the SMPTE introduced 3G-SDI to enable the transmission of 1080p video at 2.97 Gbps over existing 75-ohm coaxial cable. To support these higher video transmission speeds, the SMPTE has set increasingly stringent jitter requirements. Table 1 summarizes the timing requirements for the SMPTE-ratified SDI standards.
[Table 1. SMPTE SDI Timing Requirements — table not reproduced in this extract.]

Semiconductor manufacturers have followed suit and have started to release ICs that are purpose-built for 6G-SDI. These devices include cable equalizers, cable drivers, reclockers and FPGAs with integrated SDI transceivers. One limitation of 6G-SDI is that 4K UHD can be transmitted at no more than 30 frames per second. Higher data rates are required to transmit video at higher frame rates. Anticipating this demand, manufacturers are starting to release 12G-SDI components that support data rates of 11.88 Gb/s.

Productization of these "proprietary" solutions increases the risk of future interoperability concerns. For this reason, the SMPTE has assembled a new Working Group to define UHD single-link, dual-link and quad-link electrical and optical SDI interfaces with nominal link rates of 6 Gb/s, 12 Gb/s and 24 Gb/s to support next-generation multi-media, high-frame-rate data transmission.

Timing Challenges for 6G-SDI

UHD video transmission creates many new hardware design challenges. There are three key timing-related design challenges affecting 6G-SDI applications.

Jitter

Excessive jitter increases the bit-error rate of the serial transmission link, degrading the quality of the video signal and potentially leading to corrupted, unrecoverable video data. Each component in the video signal chain, including cables, BNC connectors, printed circuit board traces, equalizers, reclockers, encoders/decoders, crosspoint switches and SDI transceivers, consumes a portion of the overall jitter budget.

Consider a signal chain that includes a coaxial cable, equalizer and reclocker. Typically, digital video is transmitted as an 800 mV binary digital signal on a coaxial cable. Signal losses increase with frequency on coaxial cables. To adjust for cable losses, equalizers are used to restore the original amplitude of the signal. Although equalizers are needed to maintain signal quality, they can also add jitter.

To mitigate the accumulation of jitter caused by the equalizer, reclockers are used to recover the clock from the digital video signal using a clock and data recovery circuit (CDR). A common implementation is to use a voltage-controlled crystal oscillator (VCXO) to synchronize to the recovered clock, filter unwanted jitter and retime the output data signal. For 6G-SDI reclockers, it is critical to use low-jitter oscillators for reference timing. Starting with a low-noise reference clock enables more jitter budget to be allocated to the other components, potentially reducing their cost and complexity while simplifying PCB design.

[Figure 1. Example of SDI Application with Jitter-Attenuating High-Precision Clock]

Genlock (Timing Synchronization)

In a studio environment, all video sources are synchronized to a common reference signal from a master sync generator. This synchronization simplifies downstream video processing and switching while reducing the amount of buffering required. This process is known as "Genlocking the video equipment." Typically a clock generator with a low-bandwidth PLL (typically <10 Hz) is used to provide synchronous timing to the SDI serializer.

With SDI data rates increasing to 6G and beyond, it is critical that the Genlock timing generator provide a low-jitter reference clock. A rule of thumb is to select a Genlock clock that consumes no more than 20 percent of the overall timing alignment jitter budget.

Frequency Flexibility

HD-SDI and 3G-SDI equipment require timing ICs to generate a combination of frequencies to support a myriad of different HDTV video formats and frame rates. Typical frequencies include 74.25 MHz, 74.25/1.001 MHz, 148.5 MHz and 148.5/1.001 MHz. This reference signal is internally multiplied within the SDI transmitter by a factor of 10 or 20 to generate the 1.485 Gb/s or 2.97 Gb/s signal. Higher reference clock frequencies, including 297 MHz and 297/1.001 MHz, are used in proprietary 6G-SDI applications today. Using a higher-frequency oscillator reference is superior to performing the additional clock multiplication within the SDI transmitter's clock multiplier unit (CMU), because discrete high-performance timing devices have lower jitter than the integrated PLLs used in SDI transmitters.
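The relationship between these reference clocks and the serial line rates is a fixed multiplication, which a few lines make concrete. A sketch using the figures quoted above: integer frame rates use the exact reference clock, while NTSC-family rates divide it by 1.001.

```python
# Serial rate = parallel reference clock x serializer multiplication factor.
links = {
    "HD-SDI": (74.25e6, 20),   # -> 1.485 Gb/s
    "3G-SDI": (148.5e6, 20),   # -> 2.97 Gb/s
    "6G-SDI": (297.0e6, 20),   # -> 5.94 Gb/s
}

for name, (ref_hz, factor) in links.items():
    exact = ref_hz * factor
    fractional = exact / 1.001  # e.g. from a 74.25/1.001 MHz reference
    print(f"{name}: {exact / 1e9:.3f} Gb/s exact, "
          f"{fractional / 1e9:.4f} Gb/s with /1.001 frame rates")
```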
A rule of thumb is to select a Genlock clock that consumes no more than 20 percent of the overall timing and alignment jitter budget.

Frequency Flexibility

HD-SDI and 3G-SDI equipment requires timing ICs that generate a combination of frequencies to support a myriad of HDTV video formats and frame rates. Typical frequencies include 74.25 MHz, 74.25/1.001 MHz, 148.5 MHz and 148.5/1.001 MHz. The reference signal is multiplied within the SDI transmitter by a factor of 10 or 20 to generate the 1.485 Gb/s or 2.97 Gb/s serial signal (the arithmetic sketch at the end of this section works through these frequency plans).

Higher reference clock frequencies, including 297 MHz and 297/1.001 MHz, are used in today's proprietary 6G-SDI applications. Using a higher-frequency oscillator reference is preferable to performing the additional clock multiplication within the SDI transmitter's clock multiplier unit (CMU), because discrete high-performance timing devices have lower jitter than the integrated PLLs used in SDI transmitters.

Low-Jitter, Frequency-Flexible Clocking Solutions for 6G-SDI/12G-SDI

The ideal clocking solutions for 6G-SDI and 12G-SDI digital video transmission are optimized for ultra-low-jitter operation, Genlock video synchronization and frequency flexibility. They must also be backward-compatible with 3G-SDI and other legacy video standards. In the example shown in Figure 1, the Skyworks Si552 dual-frequency VCXO serves as the frequency reference for a 6G-SDI reclocker. Simple pin strapping selects between the 297 MHz and 297/1.001 MHz rates. With jitter performance of 6.6 ps pk-pk, the Si552 maximizes jitter margin and reduces design risk by shifting more of the jitter budget to the cable equalizer and cable driver.

An alternative to a VCXO is a complete PLL solution. The Si5324 jitter-attenuating clock is well suited to 6G-SDI Genlock clock generation: it generates any output frequency from any input frequency without discrete VCXOs, and its frequency configuration is fully programmable through I2C/SPI without external BOM changes. Its fully integrated low-pass filter attenuates timing and alignment jitter above 4 Hz, providing both jitter filtering and timing synchronization. The Si5324 delivers 5 ps pk-pk jitter performance, well below the requirements of 6G-SDI transmitters.

Summary

Continued innovation in 4K digital cinema and ultra-high-definition (UHD) television increases the need for low-jitter, high-performance timing solutions. Frequency-flexible, low-jitter clocks and oscillators play a critical role in enabling the market transition to 6 Gb/s and faster data rates. Skyworks offers a comprehensive portfolio of frequency-flexible, low-jitter XO/VCXOs, CMEMS oscillators, clock generators, clock buffers and jitter-attenuating clocks.

Learn more about timing solutions from Skyworks at /en/Products/Timing
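As a quick check of the frequency plans described above, the following sketch (Python, illustrative arithmetic only; the rates and multipliers restate figures from the text, not any vendor's datasheet) derives the integer and 1000/1001 fractional SDI line rates from the common reference clocks:

```python
# Illustrative arithmetic: SDI serial line rates derived from reference
# clocks. Values restate the text above; this is not vendor code.
SDI_PLANS = {
    "HD-SDI": (74.25e6, 20),   # 74.25 MHz x 20 -> 1.485 Gb/s
    "3G-SDI": (148.5e6, 20),   # 148.5 MHz x 20 -> 2.97 Gb/s
    "6G-SDI": (297.0e6, 20),   # 297 MHz x 20 -> 5.94 Gb/s
}

for name, (ref_hz, mult) in SDI_PLANS.items():
    integer_rate = ref_hz * mult
    fractional = integer_rate / 1.001      # "NTSC-friendly" 1000/1001 variant
    print(f"{name}: {integer_rate / 1e9:.3f} Gb/s "
          f"(fractional: {fractional / 1e9:.6f} Gb/s)")
```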

High-Speed Camera Manual
Memory options (capacity in frames at maximum resolution):
• 8 GB (standard): 5,455 frames
• 16 GB (option): 10,916 frames
• 32 GB (option): 21,839 frames
• 64 GB (option): 43,684 frames
Frame capacity across the camera's resolution/frame-rate settings (maximum resolution first; setting labels not reproduced in this excerpt):
• 8 GB: 5,455 / 5,455 / 5,455 / 5,455 / 5,455 / 5,455 / 5,455 / 5,586 / 6,650 / 10,912 / 18,320 / 24,508 / 31,931 / 42,333 / 84,669 / 199,580
• 16 GB: 10,916 / 10,916 / 10,916 / 10,916 / 10,916 / 10,916 / 10,916 / 11,178 / 13,308 / 21,835 / 36,656 / 49,036 / 63,887 / 84,700 / 169,402 / 399,309
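The quoted counts are consistent with capacity scaling linearly with memory size. As a hedged sketch, assuming a hypothetical packed 12-bit, 1024x1024 maximum-resolution frame (the actual format is not stated in this excerpt), the counts are reproduced to within about 0.2 percent, the remainder presumably being per-frame metadata:

```python
# Hedged estimate of frame capacity from memory size. The 1024x1024,
# 12-bit packed frame format is an assumption, not taken from the manual.
FRAME_BYTES = 1024 * 1024 * 12 // 8   # hypothetical bytes per frame

for mem_gib, quoted in [(8, 5455), (16, 10916), (32, 21839), (64, 43684)]:
    estimate = (mem_gib * 2**30) // FRAME_BYTES
    print(f"{mem_gib} GB: ~{estimate} frames (manual: {quoted})")
```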
Target applications include:
• Materials science • Combustion research • Fluid dynamics (PIV) • Ballistic imaging
• Defense and aerospace research • Plasma and arc studies • Shock waves and detonation
Lens mount: interchangeable Nikon F-mount (compatible with Nikon G-type lenses); C-mount via supplied adapter. Optional Canon EF remote-control mount.
Selectable in twenty steps (0 to 95% in 5% increments) to prevent pixel over-exposure.

High-Speed Videography Using a Dense Camera Array

Bennett Wilburn*, Neel Joshi†, Vaibhav Vaish†, Marc Levoy†, Mark Horowitz*
*Department of Electrical Engineering, †Department of Computer Science
Stanford University, Stanford, CA 94305

Abstract

We demonstrate a system for capturing multi-thousand frame-per-second (fps) video using a dense array of cheap 30 fps CMOS image sensors. A benefit of using a camera array to capture high-speed video is that we can scale to higher speeds by simply adding more cameras. Even at extremely high frame rates, our array architecture supports continuous streaming to disk from all of the cameras. This allows us to record unpredictable events, in which nothing occurs before the event of interest that could be used to trigger the beginning of recording.

Synthesizing one high-speed video sequence using images from an array of cameras requires methods to calibrate and correct those cameras' varying radiometric and geometric properties. We assume that our scene is either relatively planar or is very far away from the camera and that the images can therefore be aligned using projective transforms. We analyze the errors from this assumption and present methods to make them less visually objectionable. We also present a new method to automatically color match our sensors. Finally, we demonstrate how to compensate for spatial and temporal distortions caused by the electronic rolling shutter, a common feature of low-end CMOS sensors.

1. Introduction

As semiconductor technology advances, capturing and processing video from many cameras becomes increasingly easy and inexpensive. It therefore makes sense to ask what we can accomplish with many cameras and plentiful processing. To answer this question, we have built a custom array of over one hundred inexpensive CMOS image sensors, essentially a gigasample-per-second photometer. We are free to allocate those samples in many ways: abutting the cameras' fields of view for increased resolution, viewing the same regions with varying exposure times to increase dynamic range, and so on. In this paper, we explore distributing the samples in time to simulate a single high-speed camera.

Creating a single high-speed camera from our array requires a combination of fine control over the cameras and compensation for the varying geometric and radiometric properties characteristic of cheap image sensors. We show that we can geometrically align our images with 2D homographies and present ways to minimize objectionable artifacts due to alignment errors. To achieve good color matching between cameras, we use a two-step process that iteratively configures the sensors to fit a desired linear response over the range of intensities in our scene, then characterizes and corrects the sensor outputs in postprocessing. Another characteristic of inexpensive CMOS sensors is the electronic rolling shutter, which causes distortions for fast-moving objects. We show that rolling shutter images are diagonal planes in the spatiotemporal volume. Slicing the volume of rolling shutter images along vertical planes of constant time eliminates the distortions. We also explore ways to extend performance by taking advantage of the unique features of multiple camera sensors: parallel compression for very long recordings, and exposure windows that span multiple high-speed frame times for increasing the frame rate or signal-to-noise ratio.

2. Previous Work

High-speed imaging is used to analyze automotive crash tests, golf swings, explosions, and more. Industrial, research, and military applications have motivated increasingly faster high-speed cameras. Currently, off-the-shelf cameras from companies like Photron and Vision Research can record 800x600 pixels at 4800 fps, or 2.3 gigasamples per second. These devices use a single image sensor and are typically limited to storing just a few seconds of data because of the huge bandwidths involved in high-speed video. The short recording duration means that acquisition must be synchronized with the event of interest. Our system captures and compresses data from many cameras in parallel, allowing us to stream for minutes and eliminating the need for triggers.

To our knowledge, little work has been done on generating high-speed video from multiple cameras running at video frame rates, although several groups have demonstrated the utility of large camera arrays. Virtualized Reality™ [1] captures video from 49 synchronized, color, off-the-shelf S-Video cameras for 3D reconstruction and virtual navigation through dynamic scenes. Yang et al. built a real-time distributed light field camera [2] from an 8x8 grid of commodity webcams for real-time light field rendering of dynamic scenes. Their system produces one video stream's worth of data, although this stream can be assembled from multiple camera inputs in real time. With our more flexible array, we can explore ways to extend camera performance other than view interpolation.

The prior work closest to ours is the paper by Shechtman et al. on increasing the spatio-temporal resolution of video from multiple cameras [3]. They acquire video at regular frame rates with motion blur and aliasing, then synthesize a high-speed video. Our method, with better timing control and more cameras, eliminates the need for this sophisticated processing, although we will show that we can leverage this work to extend the range of the system.
3. High-Speed Videography Using an Array of Cameras

In this section, we present an overview of our camera array hardware and the features that are critical for this application. We then discuss the issues in synthesizing one high-speed video stream from many cameras. Specifically, our cameras have slightly different centers of projection, and vary in focal length, orientation, color response, and so on. They must be calibrated relative to each other and their images corrected and aligned in order to form a visually acceptable video sequence.

3.1. The Multiple Camera Array

Our 100-camera array is based on the prototype six-camera architecture described in [4]. This work and that of [5] are the first applications demonstrating the final system. The cameras use CMOS image sensors, MPEG compression, IEEE1394, and a simple means for distributing clock and trigger signals to the entire array. Each camera has a processing board that manages the compression and IEEE1394 interface, and a separate small board that contains the image sensor. We use Omnivision OV8610 sensors to capture 640x480 pixel, Bayer mosaic color images at 30 fps.

The array can take up to twenty synchronized, sequential snapshots from all of the cameras at once. The images are stored locally in memory at each camera, limiting us to only 2/3 s of video. Using MPEG compression at each camera, we can capture essentially indefinitely. MPEG compresses 9 MB/s of raw video to 4 Mb/s streams, reducing the total video bandwidth of our 52-camera array from 457 MB/s to a more manageable 26 MB/s. The resulting compression ratio is 18:1, which is considered mild for MPEG. We require just one PC per 26 cameras to capture the compressed video.
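A quick back-of-the-envelope check of these bandwidth figures, as a minimal sketch (assuming one byte per Bayer-mosaic pixel; the paper's raw total matches binary mebibytes while its compressed total matches decimal megabytes):

```python
# Sanity-checking the quoted bandwidth numbers. Assumes 8-bit Bayer pixels
# (one byte per pixel); "457 MB/s" matches mebibytes, "26 MB/s" decimal MB.
W, H, FPS, CAMERAS = 640, 480, 30, 52

raw_per_cam = W * H * FPS        # bytes/s of raw video per camera
mpeg_per_cam = 4e6 / 8           # a 4 Mb/s MPEG stream in bytes/s

print(f"raw per camera:   {raw_per_cam / 2**20:.1f} MiB/s")            # ~8.8
print(f"raw, 52 cameras:  {raw_per_cam * CAMERAS / 2**20:.0f} MiB/s")  # ~457
print(f"mpeg, 52 cameras: {mpeg_per_cam * CAMERAS / 1e6:.0f} MB/s")    # 26
print(f"compression:      {raw_per_cam / mpeg_per_cam:.0f}:1")         # ~18
```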
Figure 1: An array of 52 cameras for capturing high-speed video. The cameras are packed closely together to approximate a single center of projection.

Each camera's exposure duration can be set in increments of 205 µs down to a minimum of 205 µs, or four row times. Common clock and trigger signals are distributed via CAT5 cables to the entire array. Unlike the prototype, our new cameras are not only frequency-locked but can also be arbitrarily phase-shifted with respect to the trigger signal. The camera timing is accurate to within 200 ns across the entire array, or less than one tenth of a percent of our cameras' minimum exposure time. As we will show, this precise control is critical to our high-speed video application.

To approximate a camera with a single center of projection, we would like our cameras to be packed as close together as possible. The array was designed with tight packing in mind. As noted earlier, the image sensors are on separate small boards. For the work in this paper, we mounted them on a sheet of laser-cut plastic with holes for up to a 12x12 grid of cameras. Each camera board is attached to the mount by three spring-loaded screws that can be turned to fine-tune its orientation. Figure 1 shows the assembly of 52 cameras used for these experiments.

3.2. High-Speed Videography From Interleaved Exposures

Using n cameras running at a given frame rate s, we create high-speed video with an effective frame rate of h = n*s by staggering the start of each camera's exposure window by 1/h and interleaving the captured frames in chronological order. Using 52 cameras, we have s = 30, n = 52, and h = 1560 fps. Unlike a single camera, we have great flexibility in choosing exposure times. We typically set the exposure time of each camera to be 1/h or less, i.e. 1/1560 s. Such short exposure times are often light limited, creating a trade-off between acquiring more light (to improve the signal-to-noise ratio) using longer exposures, and reducing motion blur with shorter exposures. Because we use multiple cameras, we have the option of extending our exposure times past 1/h to gather more light and using temporal superresolution techniques to compute high-speed video. We will return to these ideas later.
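A minimal sketch of this trigger schedule (plain Python; the variable names are ours, not from the paper's software):

```python
# Staggered trigger schedule for interleaved exposures: n cameras at s fps,
# each offset by 1/(n*s), give an effective frame rate h = n*s.
n, s = 52, 30                       # cameras, per-camera frame rate (fps)
h = n * s                           # effective rate: 1560 fps

stagger = 1.0 / h                   # per-camera trigger offset (~641 us)
offsets_us = [i * stagger * 1e6 for i in range(n)]

print(f"h = {h} fps, stagger = {stagger * 1e6:.1f} us")
print(f"last camera fires {offsets_us[-1]:.0f} us into the "
      f"{1e6 / s:.0f} us frame interval")
```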
3.3. Geometric Alignment

To create a single high-speed video sequence, we must align the images from our 52 cameras to a reference view. Since they have different centers of projection, this is in general a difficult task, so we make the simplifying assumption that our scene lies within a shallow depth of a single object plane. In that case, we can use a simple projective transformation to align the images. Of course, this condition holds only for scenes that are either relatively flat or sufficiently far from the array relative to the camera spacing. We determine the 2D homography to align the images by taking pictures of a planar calibration target placed at the object plane. We pick one of the central cameras to be the reference view, then use point correspondences between features from that view and the others to compute a homography for each of the other cameras. This transformation effectively rectifies all of the cameras' views to a common plane, then translates them such that objects on that plane are aligned.

Figure 2 shows the alignment error as objects stray from the object plane. In this analysis (although not in our calibration procedure), we assume that our cameras are located on a plane, their optical axes are perpendicular to that plane, their image plane axes are parallel, and their focal lengths f are the same. For two cameras separated by a distance a, an object at a distance s will see a disparity of d = fa/s between the two images (assuming the standard perspective camera model). Our computed homographies will account for exactly that shift when registering the two views. If the object were actually at distance s' instead of s, then the resulting disparity should be d' = fa/s'. The difference between these two disparities is our error (in metric units, not pixels) at the image plane. Equating the maximum tolerable error c to the difference between d and d', and solving for s' yields the equation

    s' = s / (1 - sc/(fa))

Evaluating this for positive and negative maximum errors gives our near and far effective focal limits. This is the same equation used to calculate the focal depth limits for a pinhole camera with a finite aperture [6]. In this instance, our aperture is the area spanned by our camera locations. Rather than becoming blurry, objects off the focal plane remain sharp but appear to move around from frame to frame in the aligned images.

Figure 2: Using a projective transform to align our images causes errors for objects off the assumed plane. The solid lines from the gray ball to each camera show where it appears in each view with no errors. The dashed line shows how the alignment incorrectly projects the image of the ball in the second camera to an assumed object plane, making the ball appear to jitter spatially when frames from the two cameras are temporally interleaved.

For our lab setup, the object plane is 3 m from our cameras, the camera pitch is 33 mm, and the maximum separation between any two of the 52 cameras is 251 mm. The image sensors have a 6 mm focal length and a pixel size of 6.2 µm. Choosing a maximum tolerable error of +/- one pixel, we get near and far focal depth limits of 2.963 and 3.036 m, respectively, for a total depth of field of 7.3 cm. Note that these numbers are a consequence of filming in a confined laboratory. For many high-speed video applications, the objects of interest are sufficiently far away to allow much larger effective depths of field.

The false motion of off-plane objects can be rendered much less visually objectionable by ensuring that cameras sequential in time are spatially adjacent in the camera mount. This constrains the maximum distance between cameras from one view in the final high-speed sequence to the next to only 47 mm and ensures that the apparent motion of misaligned objects is smooth and continuous. If we allow the alignment error to vary by a maximum of one pixel from one view to the next, our effective depth of field increases to 40 cm. Figure 3 shows the firing order we use for our 52-camera setup.

Figure 3: The firing order for our 52-camera array. Ensuring that sequential cameras in the trigger sequence are spatially adjacent in the array makes frame-to-frame false motion of off-plane objects small, continuous and less objectionable.
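Plugging the lab numbers above into the focal-limit equation reproduces the paper's depth-of-field figures; a minimal sketch (the small differences in the last digit are rounding):

```python
# Effective focal depth limits from s' = s / (1 -/+ s*c/(f*a)).
# All values are taken from the text above.
f = 6e-3         # focal length (m)
s = 3.0          # object-plane distance (m)
a = 0.251        # maximum camera separation (m)
c = 6.2e-6       # tolerable error: one 6.2 um pixel at the image plane

near = s / (1 + s * c / (f * a))
far  = s / (1 - s * c / (f * a))
print(f"near {near:.3f} m, far {far:.3f} m, "
      f"depth of field {100 * (far - near):.1f} cm")
# -> near 2.963 m, far 3.038 m, ~7.4 cm (paper: 2.963 m, 3.036 m, 7.3 cm)
```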
3.4. Radiometric Calibration

Variations in the radiometric properties of our cameras will cause color differences between interleaved frames in our high-speed videos. Our inexpensive cameras have widely different default intensity and color responses, and unreliable automatic white balance and autogain functions. We have implemented a new, automatic method to configure and calibrate our cameras using images of a Macbeth color checker chart. We first adjust the sensor gains and offsets so their outputs for the six grayscale patches on the chart best fit a line that maps the brightest and darkest squares to RGB values of (220,220,220) and (20,20,20), respectively. This simultaneously white balances our images and maximizes the usable data in each color channel for each camera. We fit to a range of 20-220 because our sensors are nonlinear near the limits of their output range (16-240). A second post-processing step generates lookup tables to correct nonlinearities in each sensor's response and then determines 3x3 correction matrices to best match, in the least squares sense, each camera's output to the mean values from all of the sensors. A more thorough treatment of our color calibration can be found in [7]. At the moment we are not correcting for cos^4 falloff or vignetting.
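The second, post-processing step lends itself to a few lines of linear algebra. Below is a hedged sketch (NumPy; not the authors' code) of fitting a per-camera 3x3 matrix that maps a camera's linearized patch colors onto the mean response of all cameras, in the least-squares sense:

```python
import numpy as np

def color_correction_matrix(cam_rgb, mean_rgb):
    """Fit M so that cam_rgb @ M best matches mean_rgb (least squares).
    cam_rgb, mean_rgb: (n_patches, 3) linearized RGB patch values."""
    M, *_ = np.linalg.lstsq(cam_rgb, mean_rgb, rcond=None)
    return M

# Toy usage with stand-in data for 24 Macbeth chart patches.
rng = np.random.default_rng(0)
mean_rgb = rng.uniform(20, 220, size=(24, 3))      # array-wide mean colors
crosstalk = np.array([[1.10, 0.02, 0.00],          # synthetic camera response
                      [0.00, 0.95, 0.03],
                      [0.01, 0.00, 1.05]])
cam_rgb = mean_rgb @ crosstalk + rng.normal(0, 1, (24, 3))

M = color_correction_matrix(cam_rgb, mean_rgb)
corrected = cam_rgb @ M     # apply the same product to every image pixel
```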
4. Overcoming the Electronic Rolling Shutter

For image sensors that have a global, "snapshot" shutter, such as an interline transfer CCD, the method we have described would be complete. Unfortunately, the image sensors in our array use an electronic rolling shutter. A snapshot shutter starts and stops light integration for every pixel in the sensor at the same times. Readout is sequential by scanline, requiring a sample-and-hold circuit at each pixel to preserve the value from the time integration ends until it can be read out. An electronic rolling shutter, on the other hand, exposes each row just before it is read out. Rolling shutters are attractive because they do not require the extra sample-and-hold circuitry at each pixel, making the circuit design simpler and increasing the fill factor (the portion of each pixel's area dedicated to collecting light). A quick survey of Omnivision, Micron, Agilent, Hynix and Kodak reveals that all of their color, VGA (640x480) resolution, 30 fps CMOS sensors use electronic rolling shutters.

Figure 4: The electronic rolling shutter. Many low-end image sensors use an electronic rolling shutter, analogous to an open slit that scans over the image. Each row integrates light only while the slit passes over it. (a) An example of an object moving rapidly to the right while the rolling shutter scans down the image plane. (b) In the resulting image, the shape of the moving object is distorted.

The disadvantage of the rolling shutter, illustrated in Figure 4, is that it distorts the shape of fast-moving objects, much like the focal plane shutter in a 35 mm SLR camera. Since scanlines are read out sequentially over the 33 ms frame time, pixels lower in the image start and stop integrating incoming light nearly a frame later than pixels at the top of the image.

Figure 5 shows how we remove the rolling shutter distortion. The camera triggers are evenly staggered, so at any time they are imaging different regions of the object plane. Instead of interleaving the aligned images, we take scanlines that were captured at the same time by different cameras and stack them into one image.

Figure 5: Correcting the electronic rolling shutter distortion. The images on the left represent views from five cameras with staggered shutters. At any time, different rows (shown in gray) in each camera are imaging the object plane. By stacking these rows into one image, we create a view with no distortion.

One way to view this stacking is in terms of a spatiotemporal volume, shown in Figure 6. Images from cameras with global shutters are vertical slices (along planes of constant time) of the spatiotemporal volume. Images from rolling shutter cameras, on the other hand, are diagonal slices in the spatiotemporal volume. The scanline stacking we just described is equivalent to slicing the volume of rolling shutter images along planes of constant time. We use trilinear interpolation between frames to create the images. The slicing results in smooth, undistorted images. Figure 7 shows a comparison of frames from sliced and unsliced videos of a rotating fan. The videos were filmed with the 52-camera setup, using the trigger ordering in Figure 3.

Figure 6: Slicing the spatiotemporal volume to correct rolling shutter distortion. (a) Cameras with global shutters capture their entire image at the same time, so each one is a vertical slice in the volume. (b) Cameras with rolling shutters capture lower rows in their images later in time, so each frame lies on a slanted plane in the volume. Slicing rolling shutter video along planes of constant time in the spatiotemporal volume removes the distortion.

Figure 7: "Slicing" rolling shutter videos to eliminate distortions. (a) An aligned image from one view in the fan sequence. Note the distorted, non-uniform appearance of the fan blades. (b) "Slicing" the stacked, aligned frames so that rows in the final images are acquired at the same time eliminates rolling shutter artifacts. The moving blades are no longer distorted.

The spatiotemporal analysis so far neglects the interaction between the rolling shutter and our image alignments. Vertical components in the alignment transformations raise or lower images in the spatiotemporal volume. As Figure 8 shows, such displacements also shift rolling shutter images later or earlier in time. By altering the trigger timing of each camera to cancel this displacement, we can restore the desired evenly staggered timing of the images. Another way to think of this is that a vertical alignment shift of x rows implies that features on the object plane are imaged not only x rows lower in the camera's view, but also x row times later because of the rolling shutter. A row time is the time it takes the shutter to scan down one row of pixels. Triggering the camera x row times earlier exactly cancels this delay and restores the intended timing. Note that pure horizontal translations of rolling shutter images in the spatiotemporal volume do not alter their timing, but projections that cause scale changes, rotations or keystoning alter the timing in ways that cannot be corrected with only a temporal shift.

Figure 8: Alignment of rolling shutter images in the spatiotemporal volume. (a) Vertically translating rolling shutter images displaces them toward planes occupied by earlier or later frames. This is effectively a temporal offset in the image. (b) Translating the image in time by altering the camera shutter timing corrects the offset. The image is translated along its original spatiotemporal plane.

We aim our cameras straight forward so their sensor planes are as parallel as possible, making their alignment transformations as close as possible to pure translations. We compute the homographies mapping each camera to the reference view, determine the vertical components of the alignments at the center of the image, and subtract the corresponding time displacements from the cameras' trigger times. As we have noted, variations in the focal lengths and orientations of the cameras prevent the homographies from being strictly translations, causing residual timing errors. In practice, for the regions of interest in our videos (usually the center third of the images) the maximum error is typically under two row times. At 1560 fps, the frames are twelve row times apart.
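As a hedged sketch of the slicing step (not the authors' implementation; it uses nearest-neighbor selection in time where the paper interpolates trilinearly), each output row is taken from whichever aligned source frame exposed that row closest to the desired output time:

```python
import numpy as np

def slice_constant_time(frames, t, stagger, row_time, fps=30):
    """Build one constant-time image from staggered rolling-shutter cameras.
    frames[c][k]: aligned frame k from camera c, a (rows, cols) array.
    Row r of camera c's frame k was read out at k/fps + c*stagger + r*row_time.
    Nearest-neighbor in time; the paper uses trilinear interpolation."""
    n = len(frames)
    rows = frames[0][0].shape[0]
    out = np.empty_like(frames[0][0])
    for r in range(rows):
        # Pick the (camera, frame) whose readout of row r is closest to t.
        c, k = min(((c, k) for c in range(n) for k in range(len(frames[c]))),
                   key=lambda ck: abs(ck[1] / fps + ck[0] * stagger
                                      + r * row_time - t))
        out[r] = frames[c][k][r]
    return out
```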
The timing offset error caused by the rolling shutter is much easier to see in a video than in a sequence of still frames. The following example and all other videos in this paper are available online at /papers/highspeedarray/. The video fan_even.mpg shows a fan filmed at 1560 fps using our 52-camera setup and evenly staggered trigger times. The fan appears to speed up and slow down, although its real velocity is constant. Note that the effect of the timing offsets is lessened by our sampling order: neighboring cameras have similar alignment transformations, so we do not see radical changes in the temporal offset of each image. Fan_shifted.mpg is the result of shifting the trigger timings to compensate for the alignment translations. The fan's motion is now smooth, but the usual artifacts of the rolling shutter are still evident in the misshapen fan blades. Fan_shifted_sliced.mpg shows how slicing the video from the retimed cameras removes the remaining distortions.

5. Results

Filming a rotating fan is easy because no trigger is needed and the fan itself is nearly planar. In this section we present a more interesting acquisition: 1560 fps video of balloons popping, several seconds apart. Because our array can stream at high speed, we did not need to explicitly synchronize video capture with the popping of the balloons. In fact, when we filmed we let the video capture run while we walked into the center of the room, popped two balloons one at a time, and then walked back to turn off the recording. This video is also more colorful than the fan sequence, thereby exercising our color calibration.

Figure 9 shows frames of one of the balloons popping. We have aligned the images but not yet sliced them to correct rolling shutter-induced distortion. Although we strike the top of the balloon with a tack, it appears to pop from the bottom. In the video (balloon1_distorted.mpg on the web site), one can also see the artificial motion of our shoulders, which are in front of the object focal plane. Because of our camera ordering and tight packing, this motion, although incorrect, is relatively unobjectionable.

Figure 9: 1560 fps video of a popping balloon with rolling shutter distortions. The balloon is struck at the top by the tack, but it appears to pop from the bottom. The top of the balloon seems to disappear.

Figure 10 shows the second balloon in the sequence popping. The full video online is balloon2_sliced.mpg. Slicing the stacked balloon images removes the rolling shutter distortion, and the balloon correctly appears to pop from where it is punctured by the pin.

Figure 10: 1560 fps video of a popping balloon, corrected to eliminate rolling shutter distortions.

This slicing fixes the rolling shutter distortions but makes alignment errors and color variations more objectionable. Before slicing, the alignment error for objects off the focal plane was constant for a given depth and varied somewhat smoothly from frame to frame.
After slicing, off-plane objects, especially the background, appear distorted because their alignment error varies with their vertical position in the image. This distortion pattern scrolls down the image as the video plays and becomes more obvious. Before slicing, the color variation of each camera was also confined to a single image in the final high-speed sequence. These short-lived variations were then averaged by our eyes over several frames. Once we slice the images, the color offsets of the images also create a sliding pattern in the video. Note that some color variations, especially for specular objects, are unavoidable for a multi-camera system. The reader is encouraged to view the online videos to appreciate these effects. The unsliced video of the second balloon popping, balloon2_distorted.mpg, is provided for comparison, as well as a continuous video showing both balloons, balloons.mpg.

6. Discussion and Future Work

We have demonstrated a method for acquiring very high-speed video using a densely packed array of lower frame rate cameras with precisely timed exposure windows. The system scales to higher frame rates by simply adding more cameras. Our parallel capture and compression architecture lets us stream essentially indefinitely and requires no triggers, a feature we have not found in any commercially available off-the-shelf high-speed camera. Inaccuracies correcting the temporal offset caused by aligning our rolling shutter images are roughly one sixth of our frame time and limit the scalability of our array. A more fundamental limit to the scalability of the system is the minimum integration time of the camera. At 1560 fps capture, the exposure time for our cameras is three times the minimum value. If we scale beyond three times the current frame rate, the exposure windows of our cameras will begin to overlap, and our temporal resolution will no longer match our frame rate.

The possibility of overlapping exposure intervals is a unique feature of our system: no single camera can expose for longer than the time between frames. If we can use temporal superresolution techniques to recover high-speed images from cameras with overlapping exposures, we could scale the frame rate even higher than the inverse of the minimum exposure time. As exposure times decrease at very high frame rates, image sensors become light limited. Typically, high-speed cameras solve this by increasing the size of their pixels. Applying temporal superresolution to overlapped high-speed exposures is another possible way to increase the signal-to-noise ratio of a high-speed multi-camera system.

To see if these ideas show promise, we applied the temporal superresolution method presented by Shechtman et al. [3] to video of a fan filmed with an exposure window that spanned four high-speed frame times. We omitted the temporal alignment process because we know the convolution that relates high-speed frames to our blurred images. Figure 11 shows a comparison between the blurred blade, the result of the temporal superresolution, and the blade captured in the same lighting with a one-frame exposure window. Encouragingly, the deblurred image becomes sharper and less noisy.

Figure 11: Overlapped exposures with temporal superresolution. (a) Fan blades filmed with an exposure window four high-speed frames long. (b) Temporal superresolution yields a sharper, less noisy image. Note that sharp features like the specular highlights and stationary edges are preserved. (c) A contrast-enhanced image of the fan filmed under the same lighting with an exposure window one fourth as long. Note the highly noisy image.
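Because the blur kernel here is known (an exposure spanning four high-speed frame times is a temporal box filter), a simple per-pixel deconvolution along the time axis illustrates the idea. The sketch below uses Wiener deconvolution as a stand-in; it is not the Shechtman et al. method the paper actually applied, and the snr parameter is a tuning assumption:

```python
import numpy as np

def temporal_wiener_deblur(video, span=4, snr=100.0):
    """Deblur an interleaved high-speed sequence whose exposures each
    spanned `span` high-speed frame times (a known temporal box blur).
    video: (T, H, W) array. Per-pixel Wiener deconvolution along time;
    assumes circular boundary conditions for simplicity."""
    T = video.shape[0]
    kernel = np.zeros(T)
    kernel[:span] = 1.0 / span                      # temporal box-blur kernel
    K = np.fft.fft(kernel)                          # kernel spectrum
    W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)   # Wiener filter
    V = np.fft.fft(video, axis=0)
    return np.real(np.fft.ifft(V * W[:, None, None], axis=0))
```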
Using a large array of low-quality image sensors poses challenges for high-speed video. We present methods to automatically color match and geometrically align the images from our cameras that can be of general use beyond this application. Nearly all low-end CMOS image sensors use electronic rolling shutters that cause spatial and temporal distortions in high-speed videos. We demonstrate how to correct these distortions by retiming the camera shutters and resampling the acquired images. This mostly removes the artifacts, at the cost of making radiometric differences between the cameras and alignment errors off the object plane more noticeable.

One opportunity for future work is a more sophisticated alignment method, possibly based on optical flow. The false motion induced by misalignments will have a fixed pattern set by the camera arrangement and the depth of the objects, so one could imagine an alignment method that detected and corrected that motion. For such a scheme, using a camera trigger ordering that maximized the spatial distance between adjacent cameras in the temporal shutter ordering would maximize this motion, making it easier to detect and segment from the video. One could use this either to aid alignment and increase the effective depth of field, or to suppress off-plane objects and create "cross-sectional" high-speed video that depicted only objects at the desired depth.

Another avenue of research would be combining high-speed video acquisition with other means of effectively boosting camera performance. For example, one could assemble a high-speed, high dynamic range camera using clusters of cameras with varying neutral density filters and staggered trigger timings, such that at each trigger instant one camera with each density filter was firing. One can imagine creating a camera array for which effective dynamic range, frame rate, resolution, aperture, number of distinct viewpoints and more could be chosen to optimally fit a given application. We are looking forward to investigating these ideas.

Acknowledgments

The authors would like to thank Augusto Roman and Guillaume Poncin for help on the system and experiments. This work was supported by DARPA grants F29601-00-2-0085 and NBCH-1030009, and NSF grant IIS-0219856-001.

References

[1] P. Rander, P. Narayanan, and T. Kanade, "Virtualized reality: Constructing time-varying virtual worlds from real events," in Proceedings of IEEE Visualization, Phoenix, Arizona, Oct. 1997, pp. 277-283.
[2] J.-C. Yang, M. Everett, C. Buehler, and L. McMillan, "A real-time distributed light field camera," in Eurographics Workshop on Rendering, 2002, pp. 1-10.
[3] E. Shechtman, Y. Caspi, and M. Irani, "Increasing space-time resolution in video sequences," in European Conference on Computer Vision (ECCV), May 2002.
[4] B. Wilburn, M. Smulski, H. Lee, and M. Horowitz, "The light field video camera," in Media Processors 2002, ser. Proc. SPIE, S. Panchanathan, V. Bove, and S. Sudharsanan, Eds., vol. 4674, San Jose, USA, January 2002, pp. 29-36.
[5] V. Vaish, B. Wilburn, and M. Levoy, "Using plane + parallax for calibrating dense camera arrays," in CVPR 2004, 2004.
[6] R. Kingslake, Optics in Photography. SPIE Optical Engineering Press, 1992.
[7] N. Joshi, "Color calibration for arrays of inexpensive image sensors," Stanford University, Tech. Rep. GET THIS NUMBER, 2004, paper in preparation.
