Adaptive MPEG-2 Video Data Hiding Scheme


Hikvision DS-2DE4215IW-DE(E) 2MP 15× Network IR Speed Dome Camera Manual


The Hikvision DS-2DE4215IW-DE(E) 2MP 15× Network IR Speed Dome adopts a 1/2.8" progressive scan CMOS chip. With its 15× optical zoom lens, the camera offers more detail over expansive areas. This series of cameras can be widely used for wide-range high-definition scenarios such as rivers, roads, railways, airports, squares, parks, scenic spots, and venues.

Key Features
• 1/2.8" progressive scan CMOS
• Up to 1920 × 1080 @ 30 fps resolution
• Excellent low-light performance with powered-by-DarkFighter technology
• 15× optical zoom, 16× digital zoom
• WDR, HLC, BLC, 3D DNR, Defog, EIS, Regional Exposure, Regional Focus
• Up to 100 m IR distance
• Supports H.265+/H.265 video compression

Camera Module
Image Sensor: 1/2.8" progressive scan CMOS
Min. Illumination: Color: 0.005 lux @ (F1.6, AGC ON); B/W: 0.001 lux @ (F1.6, AGC ON); 0 lux with IR
White Balance: Auto/Manual/ATW (Auto-tracking White Balance)/Indoor/Outdoor/Fluorescent Lamp/Sodium Lamp
Gain: Auto/Manual
Shutter Time: 50 Hz: 1/1 s to 1/30,000 s; 60 Hz: 1/1 s to 1/30,000 s
Day & Night: IR cut filter
Digital Zoom: 16×
Privacy Mask: 24 programmable privacy masks, mask color configurable
Focus Mode: Auto/Semi-automatic/Manual
WDR: 120 dB WDR

Lens
Focal Length: 5 mm to 75 mm, 15× optical zoom
Zoom Speed: Approx. 2.3 s (optical lens, wide-tele)
Field of View: Horizontal 53.8° to 4° (wide-tele); Vertical 31.9° to 2.3° (wide-tele); Diagonal 60.4° to 4.6° (wide-tele)
Working Distance: 10 mm to 1500 mm (wide-tele)
Aperture Range: F1.6 to F2.8

IR
IR Distance: 100 m
Smart IR: Yes

PTZ
Movement Range (Pan): 360°, endless
Pan Speed: Configurable, 0.1°/s to 80°/s; preset speed 80°/s
Movement Range (Tilt): -15° to 90° (auto-flip)
Tilt Speed: Configurable, 0.1°/s to 80°/s; preset speed 80°/s
Proportional Zoom: Yes
Presets: 300
Patrol Scan: 8 patrols, up to 32 presets per patrol
Pattern Scan: 4 pattern scans, record time over 10 minutes per scan
Power-off Memory: Yes
Park Action: Preset/Pattern Scan/Patrol Scan/Auto Scan/Tilt Scan/Random Scan/Frame Scan/Panorama Scan
3D Positioning: Yes
PTZ Position Display: Yes
Preset Freezing: Yes
Scheduled Task: Preset/Pattern Scan/Patrol Scan/Auto Scan/Tilt Scan/Random Scan/Frame Scan/Panorama Scan/Dome Reboot/Dome Adjust/Aux Output

Compression Standard
Video Compression: Main stream: H.265+/H.265/H.264+/H.264; Sub-stream: H.265/H.264/MJPEG; Third stream: H.265/H.264/MJPEG
H.264 Type: Baseline Profile/Main Profile/High Profile
H.264+: Yes
H.265 Type: Baseline Profile/Main Profile/High Profile
H.265+: Yes
Video Bitrate: 32 Kbps to 16384 Kbps
SVC: Yes

Smart Features
Basic Event: Motion detection, video tampering detection, exception
Smart Event: Intrusion detection, line crossing detection, region entrance detection, region exiting detection, object removal detection, unattended baggage detection
Smart Record: ANR (Automatic Network Replenishment), Dual-VCA
ROI: Main stream, sub-stream, and third stream each support four fixed areas

Image
Max. Resolution: 1920 × 1080
Main Stream: 50 Hz: 25 fps (1920 × 1080, 1280 × 960, 1280 × 720), 50 fps (1280 × 960, 1280 × 720); 60 Hz: 30 fps (1920 × 1080, 1280 × 960, 1280 × 720), 60 fps (1280 × 960, 1280 × 720)
Sub-Stream: 50 Hz: 25 fps (704 × 576, 640 × 480, 352 × 288); 60 Hz: 30 fps (704 × 480, 640 × 480, 352 × 240)
Third Stream: 50 Hz: 25 fps (1920 × 1080, 1280 × 960, 1280 × 720, 704 × 576, 640 × 480, 352 × 288); 60 Hz: 30 fps (1920 × 1080, 1280 × 960, 1280 × 720, 704 × 480, 640 × 480, 352 × 240)
Image Enhancement: HLC/BLC/3D DNR/Defog/EIS/Regional Exposure/Regional Focus

Network
Network Storage: Built-in memory card slot, supporting MicroSD/SDHC/SDXC up to 256 GB; NAS (NFS, SMB/CIFS); ANR
Protocols: IPv4/IPv6, HTTP, HTTPS, 802.1x, QoS, FTP, SMTP, UPnP, SNMP, DNS, DDNS, NTP, RTSP, RTCP, RTP, TCP/IP, UDP, IGMP, ICMP, DHCP, PPPoE, Bonjour
API: ONVIF (Profile S, Profile G, Profile T), ISAPI, SDK
Simultaneous Live View: Up to 20 channels
User/Host: Up to 32 users; 3 levels: Administrator, Operator, and User
Security Measures: User authentication (ID and password), host authentication (MAC address), HTTPS encryption, IEEE 802.1x port-based network access control, IP address filtering
Client: iVMS-4200, iVMS-4500, iVMS-5200, Hik-Connect
Web Browser: IE 8 to 11, Chrome 31.0+, Firefox 30.0+, Edge 16.16299+

Interface
Network Interface: 1 RJ45 10M/100M Ethernet, PoE (802.3at, Class 4)

General
Language (Web Browser Access): 32 languages: English, Russian, Estonian, Bulgarian, Hungarian, Greek, German, Italian, Czech, Slovak, French, Polish, Dutch, Portuguese, Spanish, Romanian, Danish, Swedish, Norwegian, Finnish, Croatian, Slovenian, Serbian, Turkish, Korean, Traditional Chinese, Thai, Vietnamese, Japanese, Latvian, Lithuanian, Portuguese (Brazil)
Power: 12 VDC, 2.0 A; PoE (802.3at, Class 4), 42.5 to 57 VDC, 0.6 A. Max. 18 W, including max. 7 W for IR
Working Temperature: -30°C to 65°C (-22°F to 149°F)
Working Humidity: ≤ 90%
Protection Level: IP66; 4000 V lightning protection, surge protection and voltage transient protection
Material: ADC 12, PC, PC + 10% GF
Dimensions: Φ164.5 mm × 290 mm (Φ6.48" × 11.42")
Weight: Approx. 2 kg (4.41 lb)

DORI
The DORI (detect, observe, recognize, identify) distance gives a general idea of the camera's ability to distinguish persons or objects within its field of view. Distances below are at the tele end.
Detect (25 px/m): 960.0 m (3149.6 ft)
Observe (63 px/m): 381.0 m (1249.8 ft)
Recognize (125 px/m): 192.0 m (629.9 ft)
Identify (250 px/m): 96.0 m (315.0 ft)

Available Model
DS-2DE4215IW-DE(E), 12 VDC & PoE (802.3at, Class 4)

Dimensions (drawing; unit: mm): Φ164.5 × 290

Accessories
Included: ASW0081-1220002W power adapter; DS-1618ZJ wall mount
Optional: DS-1602ZJ wall mount; DS-1604ZJ-box wall mount with junction box; DS-1604ZJ wall mount with junction box; DS-1604ZJ-BOX-CORNER wall mount with junction box; DS-1604ZJ-pole vertical pole mount; DS-1604ZJ-BOX-POLE vertical pole mount with junction box; DS-1604ZJ-corner corner mount; DS-1661ZJ pendant mount; DS-1662ZJ pendant mount; DS-1663ZJ ceiling mount; DS-1671ZJ-SDM9 in-ceiling mount; DS-1681ZJ installation adapter; DS-1619ZJ gooseneck mount; DS-1660ZJ parapet wall mount; DS-1667ZJ extendable pole for pendant mount; DS-1682ZJ extendable pole for pendant mount; DS-1673ZJ horizontal pole mount; DS-1100KI network keyboard; LAS60-57CN-RJ45 Hi-PoE midspan; DS-1005KI USB joystick
*DS-1673ZJ should be used with DS-1661ZJ or DS-1602ZJ.
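The DORI distances above are internally consistent: the usable distance scales inversely with the required pixel density, so the observe, recognize, and identify distances all follow from the detect figure. A short sketch of that relationship, using the datasheet's own numbers:

```python
# DORI distance scales inversely with the required pixel density:
# doubling the px/m requirement halves the usable distance.
# Pixel-density thresholds are the standard DORI definitions quoted above.

DORI_PX_PER_M = {"detect": 25, "observe": 63, "recognize": 125, "identify": 250}

def dori_distances(detect_distance_m: float) -> dict:
    """Derive all DORI distances from the detect distance via d ∝ 1/(px/m)."""
    base = detect_distance_m * DORI_PX_PER_M["detect"]
    return {level: round(base / px, 1) for level, px in DORI_PX_PER_M.items()}

print(dori_distances(960.0))
# {'detect': 960.0, 'observe': 381.0, 'recognize': 192.0, 'identify': 96.0}
```

The output reproduces the datasheet's observe/recognize/identify distances from the 960 m detect distance alone.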

ACTi E936 2MP Video Analytics Outdoor Mini Fisheye Dome Camera Manual

SPECIFICATIONS

• Alarm
Alarm Trigger
Alarm Response

• Interface
Local Storage

• General
Power Source / Consumption
Weight
Dimensions (Ø x H)
Environmental / Casing
Mount Type: Surface, Wall, Pendant, Gang box
Starting Temperature: -20°C ~ 50°C (-4°F ~ 122°F)
Operating Temperature: -20°C ~ 50°C (-4°F ~ 122°F)
Operating Humidity: 10% ~ 85% RH
Approvals: CE (EN 55022 Class B, EN 55024), FCC (Part 15 Subpart B Class B), IK10, IP68, NEMA 4X, EN50155

Dome Cover
PDCX-1111: 2-inch, smoke, vandal proof (IK10)

Popular Mounting Solutions (additional accessories not required)
Wall: PMAX-0316
Pendant: PMAX-0111 + PMAX-1400
Gang Box: PMAX-0805
NPT: PMAX-0809
Standard Mounts: PMAX-1400

Power Supply
PPOE-0001: IEEE 802.3af PoE injector for Class 1, 2 or 3 devices, with universal adapter

Dimensions (drawing; unit: mm [inch])

* Latest product information: /products/
* Accessory information: /mountingselector

HD-TVI 2MP Intensifier T Indoor Mini-Board Camera Manual


FEATURES
• PRESET modes (Indoor/Outdoor/Low Light/Hallway/Lobby/Elevator)
• Full HD resolution over coax (HD-TVI)
• Supports up to Full HD 1080p @ 30 fps
• True WDR operation
• Superior low-light performance
• Amplifies existing light with no distance limitation
• 3.6 mm fixed lens
• Full OSD operation through onboard control and UTC (TVI)
• Additional 960H analog output (BNC out in yellow)
• Signal distance up to 1600 feet*
• 12 VDC operation
• 5-year warranty

SPECIFICATIONS
Image Sensor: 1/3" Progressive Scan CMOS, 2MP
Minimum Illumination: 0.001 lux (INTENSIFY 'On')
Effective Pixels: 1984 (H) x 1105 (V)
Total Pixels: 2000 (H) x 1121 (V)
Scanning System: Progressive
S/N Ratio: More than 50 dB
Output Resolution: 1920 x 1080 @ 30 fps
Video Output: 1.0 Vp-p / 75 Ohms
Electronic Shutter: Auto / Manual (1/30 ~ 1/30,000 sec.)
Day / Night: Auto / Color / B/W / External
Test Monitor Output: CVBS (BNC out in yellow)
Communication: UTC (up the coax)
Power: 12 VDC (power supply not included)
Power Consumption: 12 VDC, 200 mA max.
Operating Temperature: 14°F ~ 120°F
Unit Dimensions: 1.65" (W) x 1.86" (H) x 2.2" (L)
Unit Weight: 7 oz.
Certifications: FCC, RoHS
Signal Distance: Up to 1600 feet (depending on coax quality)*

Accessories: 1 bracket, 2 screws (for mounting), 2 screws (for bracket)

DIMENSIONS (drawing; unit: inch)

OSD MENU DETAILS

EXPOSURE: Adjusts exposure settings.
IRIS: Controls various settings related to the lens. ELC (Electronic Light Control) is used with a manual lens only; keep this mode for this item.
BRIGHTNESS: Adjusts image brightness (0~20).
SHUTTER: Auto / Manual shutter can be selected.
- AUTO: Shutter speed is selected automatically.
- FLICKER (Flicker-Less): Select this when the picture flickers; flickering can be caused by a clash with the frequency of the installed lighting.
- MANUAL: Controls the shutter speed manually, from 1/30 to 1/30,000.
INTENSIFY: Sets the desired multiple of the digital exposure length (OFF/x2 ~ x32).
AGC (Auto Gain Control): Adjusts the AGC level (0~10).

DAY / NIGHT: Adjusts day/night options.
COLOR: Always color mode.
B/W: Always B/W mode.
- ANTI-SAT.: Adjusts the anti-saturation level (0~20).
EXT: External switching (please do not use this function).
- ANTI-SAT.: Adjusts the anti-saturation level (0~20).
- EXTERN S/W, D→N LEVEL (0~20), N→D LEVEL (0~20): Set at the manufacturer side.
- DELAY: Adjusts the switching delay time (LOW/MIDDLE/HIGH).
AUTO: Day/night switches automatically based on the AGC level.
- ANTI-SAT.: Adjusts the anti-saturation level (0~20).
- AGC THRES: Adjusts the AGC threshold (0~20).
- AGC MARGIN: Adjusts the AGC margin (0~20).
- DELAY: Adjusts the switching delay time (LOW/MIDDLE/HIGH).

WHITE BAL.: Adjusts white balance options.
AWB: Goes to the optimized color level automatically.
COLOR GAIN: Sets the desired color gain value (0~20).

MAIN OUTPUT: ANALOG OUT (leave as is). ANALOG OUT: TVI (leave as is).

SPECO DNR: Reduces background noise in low-luminance environments with a 2D + 3D filtering system. Adjusts the DNR level (LOW/MIDDLE/HIGH).

BACKLIGHT: Adjusts backlight options.
WDR: Illuminates darker areas of an image while retaining the same light level for brighter areas, to even out the overall brightness of images with high contrast between bright and dark areas. Adjusts the WDR weight (LOW/MIDDLE/HIGH). *This function cannot be adjusted on the CVBS output.
BLC: Produces a clearer image of an object darkened by strong backlighting.
- H-POS: Adjusts the horizontal position (0~20).
- V-POS: Adjusts the vertical position (0~20).
- H-SIZE: Adjusts the horizontal block size (0~20).
- V-SIZE: Adjusts the vertical block size (0~20).
HLC: Masks extremely bright areas, such as car headlights, that would otherwise wash out much of the on-screen detail.

IMAGE: Adjusts various image options.
SHARPNESS: Adjusts the sharpness level; increasing this value makes picture outlines stronger and clearer (0~10).
GAMMA: Sets the desired gamma value (0.45 ~ 0.75).
MIRROR: Flips the video horizontally.
FLIP: Flips the video vertically.
ACE (D-WDR): Uses a digital wide dynamic range to balance dark and oversaturated areas within the image.
DEFOG: Activate this mode when the video or the weather is foggy.
PRIVACY: Used to hide regions of the image.
- ZONE NUM: Selects the zone number (up to 15).
- ZONE DISP: Turns the selected zone ON or OFF.
- H-POS (0~60), V-POS (0~40): Adjust the horizontal and vertical position.
- H-SIZE (0~40), V-SIZE (0~40): Adjust the horizontal and vertical size.
- Y-LEVEL (0~20), CR LEVEL (0~20), CB LEVEL (0~20): Adjust the yellow, red, and blue color levels.
MOTION: Adjusts motion detection settings.
- SENSITIVITY: Sets the desired motion sensitivity (0~20).
- WINDOW TONE: Adjusts the window tone value (0~60).
- WINDOW USE: Adjusts the window setting size (0~3).
- WINDOW ZONE: Activates or deactivates the motion window.
- DET H-POS (0~60), DET V-POS (0~40): Adjust the detection window position.
- DET H-SIZE (0~60), DET V-SIZE (0~40): Adjust the detection window size.
- ALARM: Select ON or OFF.

SYSTEM: Adjusts various camera system options.
OUTPUT / FRAME RATE: Selects the frame rate (30 fps / 60 fps) according to the video output mode.
FREQ, COM.:
- CAM ID: Sets the camera ID for RS-485 (0~255).
- BAUD RATE: Sets the baud rate for RS-485 (2400~115200).
IMAGE RANGE, COLOR SPACE
COLOR BAR: Manufacturer's option.
LANGUAGE: Sets the desired OSD language.
CAMERA TITLE
RESET: Press and hold to reset all settings to factory defaults.

MAIN SETUP OSD MENU TREE
EXPOSURE: BRIGHTNESS, SHUTTER, INTENSIFY, AGC
LENS: IRIS
DAY / NIGHT: COLOR, B/W, EXTERN, AUTO
WHITE BAL.: AWB, COLOR GAIN
SPECO DNR: LOW, MIDDLE, HIGH
BACKLIGHT: HLC, BLC, WDR
IMAGE: SHARPNESS, GAMMA, MIRROR, FLIP, ACE, DEFOG, COLOR, PRIVACY, MOTION
SYSTEM: OUTPUT, FRAME RATE, FREQ, COM., IMAGE RANGE, COLOR SPACE, COLOR BAR, LANGUAGE, CAMERA TITLE, RESET
PRESET MODE: INDOOR, OUTDOOR, LOW LIGHT, LOBBY, HALLWAY, ELEVATOR
EXIT

Speco Technologies is constantly developing product improvements. We reserve the right to modify product design and specifications without notice and without incurring any obligation. Rev. 220818
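The AUTO day/night entries above (AGC THRES, AGC MARGIN, DELAY) describe threshold-with-margin switching: the camera drops to B/W only when gain rises well past the threshold and returns to color only when it falls well below it, so the mode never oscillates around a single switching point. A minimal sketch of that hysteresis logic; this is an illustration of the named technique, not Speco's firmware, and the default threshold and margin values are arbitrary:

```python
# Illustrative hysteresis-based auto day/night switching, as described by
# the AGC THRES / AGC MARGIN settings above. The numeric defaults here are
# arbitrary stand-ins on the manual's 0~20 scale.

def day_night_mode(agc_level: int, current_mode: str,
                   agc_thres: int = 10, agc_margin: int = 3) -> str:
    """Return 'bw' or 'color' given the current AGC level (0..20)."""
    if current_mode == "color" and agc_level > agc_thres + agc_margin:
        return "bw"          # scene is dark: gain is high, switch to B/W
    if current_mode == "bw" and agc_level < agc_thres - agc_margin:
        return "color"       # scene is bright again: return to color
    return current_mode      # inside the margin band: keep the current mode

mode = "color"
for agc in (5, 12, 14, 12, 8, 6):   # gain rises at dusk, falls at dawn
    mode = day_night_mode(agc, mode)
print(mode)  # 'color': gain fell back below 10 - 3 = 7
```

The DELAY setting would correspond to additionally requiring the condition to hold for some time before switching, which is omitted here for brevity.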

NVIDIA Quadro Sync II Graphics Card Manual


NVIDIA QUADRO P4000
UNMATCHED POWER. UNMATCHED CREATIVE FREEDOM.
NVIDIA QUADRO P4000 | DATA SHEET | JUN18

Extreme Visual Computing Performance in a Single Slot Form Factor
The NVIDIA Quadro P4000 combines a 1792 CUDA core Pascal GPU, large 8 GB GDDR5 memory and advanced display technologies to deliver the performance and features that are required by demanding professional applications. The ability to create an expansive visual workspace of up to four 5K displays (5120x2880 @ 60Hz) with HDR color support lets you view your creations in stunning detail. The P4000 is specially designed with the performance that is necessary to drive immersive VR environments. Additionally, you can create massive digital signage solutions of up to thirty-two 4K displays per system by connecting multiple P4000s via Quadro Sync II [2].

Quadro cards are certified with a broad range of sophisticated professional applications, tested by leading workstation manufacturers, and backed by a global team of support specialists. This gives you the peace of mind to focus on doing your best work. Whether you're developing revolutionary products or telling spectacularly vivid visual stories, Quadro gives you the performance to do it brilliantly.

FEATURES
> Four DisplayPort 1.4 Connectors [1]
> DisplayPort with Audio
> 3D Stereo Support with Stereo Connector [1]
> NVIDIA GPUDirect Support
> Quadro Sync II Compatibility [2]
> NVIDIA nView Desktop Management Software
> HDCP 2.2 Support
> NVIDIA Mosaic [3]
> Dedicated hardware video encode and decode engines [4]

SPECIFICATIONS
GPU Memory: 8 GB GDDR5
Memory Interface: 256-bit
Memory Bandwidth: Up to 243 GB/s
NVIDIA CUDA Cores: 1792
System Interface: PCI Express 3.0 x16
Max Power Consumption: 105 W
Thermal Solution: Active
Form Factor: 4.4" H x 9.5" L, Single Slot, Full Height
Display Connectors: 4x DP 1.4
Max Simultaneous Displays: 4 direct, 4 DP 1.4 Multi-Stream
Display Resolution: 4x 4096x2160 @ 120Hz or 4x 5120x2880 @ 60Hz
Graphics APIs: Shader Model 5.1, OpenGL 4.5 [5], DirectX 12.0 [6], Vulkan 1.0 [5]
Compute APIs: CUDA, DirectCompute, OpenCL

Footnotes:
[1] VGA/DVI/HDMI/stereo support via adapter/connector/bracket
[2] NVIDIA Quadro Sync II board sold separately. Learn more at /quadro
[3] Windows 7, 8, 8.1, 10 and Linux
[4] Please refer to http://developer /video-encode-decode-gpu-support-matrix for details on NVIDIA GPU video encode and decode support
[5] Product is based on a published Khronos Specification, and is expected to pass the Khronos Conformance Testing Process when available. Current conformance status can be found at /conformance
[6] GPU supports DX 12.0 API Hardware Feature Level 12_1

© 2018 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, Quadro, nView, CUDA, and NVIDIA Pascal are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc. All other trademarks and copyrights are the property of their respective owners.

To learn more about the NVIDIA Quadro P4000 visit /quadro
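The "Up to 243 GB/s" bandwidth figure is simply the 256-bit memory interface times the effective per-pin data rate of the GDDR5 parts. A quick arithmetic check; note the ~7.6 Gbps per-pin rate is inferred here to reproduce the datasheet number, not stated in the datasheet itself:

```python
# Peak memory bandwidth = bus width (bytes) x effective per-pin rate (Gb/s).
# The 7.6 Gbps GDDR5 rate below is an assumption chosen to match the
# published "up to 243 GB/s" figure.

def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a GDDR memory interface."""
    return bus_width_bits * data_rate_gbps / 8  # bits -> bytes

print(memory_bandwidth_gbs(256, 7.6))  # 243.2, matching "up to 243 GB/s"
```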

Performance Guide for ViDi


Performance Guide for ViDi
18-Mar-2019 16:11:24 EDT

Contents
- Thinking about Performance
  - Tool and stream processing time
  - Throughput
- Performance Toolkit
  - Application Design
  - Tool Parameters
  - NVIDIA GPU Selection and Configuration
  - NVIDIA Device Branding Summary
  - Graphics Card Requirements
  - Considerations
  - Estimating Run-Time Performance
  - Glossary of Standard NVIDIA GPU Terminology
  - Multiple GPUs
  - System Configuration for Multi-GPU Systems
- What About Training Time?

Thinking about Performance
What performance aspect is important to you?

Tool and stream processing time
Individual tool processing time is shown in the Database Overview panel. The reported average time is the processing time for all of the images processed during the most recent processing run.
The processing time for a stream containing multiple tools is not available through the ViDi Suite GUI, and you cannot estimate it by summing the tool execution times, as it includes the time required to prepare and transmit view information between tools.
When thinking about stream processing, remember that the processing of tools in a ViDi stream is always serialized when you call Stream.Process(). Only one tool is ever processed at a time unless you explicitly process tools individually using Tool.Process().

Throughput
Throughput refers to the total number of images that can be processed per unit time. If your application can process multiple streams concurrently using different threads, it may be able to improve system throughput, although individual tool processing will be slower.

Performance Toolkit
In terms of increasing expense (but not efficacy):
1. Application design
2. Tool parameters
3. System configuration
4. Hardware options
5. Multiple GPUs

Application Design
The following summarizes application design characteristics that may produce faster run-time performance. Application design choices that improve performance typically have minimal impact on the behavior of the system.

Use a small number of tools per stream.
Why it's faster: The processing time for a single ViDi tool does not vary significantly based on the amount of information that the tool returns. For example, a single Blue tool that is trained to find 100 different features runs at the same speed as a tool that is trained to find only a single feature. Further, the number of features returned makes only a minimal speed difference. Similarly, a Red tool runs at the same speed regardless of how many defects it finds, and a Green tool can classify into 2 classes or 2000 classes at the same speed.
Best bang for buck: Start building your application with a single tool.

Avoid image conversions.
Why it's faster: During tool operation, the image must be sampled for processing by the neural network. This sampling requires a raster (uncompressed) format image such as a bitmap, and performing this conversion takes time. Similarly, if the tool is configured to use a single-channel (grey-scale) image as input, but the supplied images are multi-channel color images, the luminance value must be computed for each image at run time.
Best bang for buck: Attempt to solve your application using a single-channel grey-scale image.
But watch out for: Some applications require color information.

Reduce the amount of processed data.
Why it's faster: Reducing the amount of processed data (using a smaller ROI, using a mask, using as few image channels as possible) improves processing speed by reducing the total amount of data processed.
But watch out for: Restricted ROI: ViDi tools need contextual information to work well, so don't constrain the ROI too much. Downsampling is usually not needed; selecting a larger feature size can improve speed and remove the need for run-time downsampling.

Multi-threading.
Why it's faster: On systems with multiple GPUs, processing multiple streams concurrently allows tools to execute in parallel, increasing throughput. On single-GPU systems, you can configure the system to allow multiple processes to make use of the same GPU. This allows a higher GPU occupancy and can improve throughput, although tool execution time will increase.
Best bang for buck: Use the --max-process-count command line argument to enable multiple threads to access a single GPU. To enable multi-process GPU access for a runtime application, use the local control's GlobalConfig() method: control.GlobalConfig("max_process_count=2");
But watch out for: Processing time for an individual tool will increase.

C++ (unmanaged).
Why it's faster: Use of an unmanaged language environment reduces the impact of system activity on tool execution.
Best bang for buck: For low-latency, high-performance applications, use the C++ API.
But watch out for: Windows is not an RTOS.

Tool Parameters
Tool parameter choices directly affect tool execution speed, but there is typically a tradeoff between tool speed and accuracy or robustness.

Feature size: At run time, ViDi tools need to sample the entire input image. The feature size determines the number of samples required for a given image size; the larger the feature size, the fewer the samples. Expect an O(n²) increase in speed with larger feature size. But watch out: larger feature sizes may cause the tool to miss features or defects. Use parameter optimization to find an optimal size.

Sampling density: Similarly to feature size, the sampling density determines the number of samples required for a given image size. Expect an O(n²) increase in speed with lower sampling density, at the risk of missing features or defects.

Refinement parameters: The Blue and Red tools include processing-time parameters that provide more accurate results at the cost of increased execution time (Blue tool: Precision; Red tool: Iterations). Increasing the iteration value increases processing time linearly.

Low-precision mode: If your system meets certain specific requirements (CUDA Compute Capability 6.1 or greater), you can enable low-precision processing mode for any ViDi tool. Enabling low-precision mode converts any existing trained tool to use low-precision computation during processing, and it generates low-precision tools for all future training operations until it is disabled. (Once a tool has been converted to low-precision mode, it must be retrained to disable low-precision mode.) Low-precision tools can execute from 25% to as much as 50% faster than normal-precision tools, and additional run-time speed improvements for low-precision tools are seen on systems with Turing Tensor cores. But watch out: changing a tool to low-precision mode may change the results the tool produces to a small degree. Generally, high-level feature identification, defect classification, and general classification will be unchanged, but specific feature and defect regions and scores may change slightly.

NVIDIA GPU Selection and Configuration
System configuration choices directly affect tool processing speed without affecting tool accuracy or behavior. They are the most expensive options, and the hardest to predict the effect of.

NVIDIA device type: The number of CUDA cores is directly related to high-precision processing speed and training speed. The number of standard Tensor cores is directly related to processing speed and training speed. The number of Turing Tensor cores is related to processing speed in low-precision mode only; these cores do not affect standard-precision processing or training speed.

NVIDIA driver mode: Consumer-grade, gaming-oriented NVIDIA devices only support the WDDM device driver model. This driver is intended to support graphics display, not computation. Professional-grade NVIDIA cards support the TCC driver mode, which provides better performance and stability. Best bang for buck: select a Quadro or Tesla (or selected Titan) branded NVIDIA card. If using a GeForce-branded card, be aware that NVIDIA GeForce drivers are updated frequently and may not be compatible with ViDi; please visit Cognex's support page for driver recommendations. Using the TCC mode driver prevents the use of video output on the GPU card; use onboard video instead.

Optimized memory: ViDi optimized memory, which is enabled by default, improves performance by overriding the standard NVIDIA GPU memory management system. Make sure your card has at least 4 GB of GPU memory. The performance improvement is not as significant for cards using the TCC driver.

NVIDIA Device Branding Summary
The following summarizes the different NVIDIA device types:
- GeForce (consumer): Pascal: GTX 1xxx; Turing: RTX 2xxx. Video output: yes. Price point: ~$1K. TCC driver support: no. ECC memory: no. Tensor cores: RTX 2xxx and newer.
- Titan (consumer): Volta: Titan V; Pascal: Titan Xp; Turing: Titan RTX. Video output: yes. Price point: ~$3K. TCC driver support: yes. ECC memory: no. Tensor cores: Titan V and Titan RTX.
- Quadro (professional): Volta: GV100; Pascal: G/GPxxx; Turing: Quadro RTX 4xxx. Video output: yes. Price point: ~$5K. TCC driver support: yes. ECC memory: yes. Tensor cores: Quadro RTX and Quadro GV100.
- Tesla (professional): Volta: V100; Pascal: P100; Turing: T4. Video output: no. Price point: $5K+. TCC driver support: yes. ECC memory: yes. Tensor cores: V100 and T4.

Graphics Card Requirements
- NVIDIA® CUDA® enabled GPU
- CUDA compute capability 3.0 or higher

Considerations
While consumer cards and professional cards perform similarly, some considerations should be made:
- Heat dissipation: Professional cards are intended for continuous duty cycle use and are designed to dissipate heat effectively.
- Supply: Professional cards are manufactured by NVIDIA and have a longer product cycle.
- Performance and control: Professional cards support the TCC mode drivers. This allows the GPU to run as a computing device with no display output, which means you will need a second card for display (or use the motherboard's built-in display).

Estimating Run-Time Performance
The following numbers are an approximate guide to the potential performance increment for different card families (baseline = a card without run-time Tensor cores, standard mode):
- Standard mode: No Tensor cores (e.g. GTX): 100%; Volta Tensor cores (e.g. V100): 150%; Turing Tensor cores (e.g. T4): 150%.
- Low-precision mode: No Tensor cores: 125%; Volta Tensor cores: 125%; Turing Tensor cores: 175%.

Glossary of Standard NVIDIA GPU Terminology
- CUDA core: Standard NVIDIA parallel processing unit. Is it important? Yes. This is the 'standard' measure of NVIDIA GPU processing: the number of CUDA cores. The more cores, the faster the ViDi processing and training.
- ECC memory: Error-correcting-code memory; hardware support for verifying that memory reads/writes do not contain errors. Is it important? No. Because of the huge number of computations involved in training and processing neural networks, the likelihood of a memory error affecting a tool result is very low.
- TCC: Tesla Compute Cluster (driver). A high-performance driver that is optimized for computational use of an NVIDIA GPU. It is not supported by all cards, disables video output from the card, provides faster training and runtime performance, diminishes or eliminates the advantages of using ViDi optimized memory, and is configured using the nvidia-smi utility. Is it important? Yes. Whenever possible, customers should select cards that support the TCC driver mode, and they should enable the mode.
- Tensor core: Full-precision, mixed-precision (and eventually integer math) parallel processing unit dedicated to matrix multiply operations. Is it important? Yes. Starting with ViDi 3.2, ViDi automatically takes advantage of Tensor cores for faster processing and training, as long as the user has a Standard or Advanced license.
- TensorRT: NVIDIA framework for optimizing (by using low-precision and integer math) run-time performance of TensorFlow, Caffe, and other standard framework networks running on a GPU with Tensor cores. Is it important? No: ViDi uses a proprietary network architecture that is not compatible with TensorRT.

Multiple GPUs
Except under very narrow circumstances, using multiple GPUs in a single system will not reduce ViDi tool training or processing time. What multiple GPUs can do is:
- Increase system throughput when your application uses multiple threads to concurrently process images
- Increase training productivity, by allowing you to train multiple tools at the same time
There is one circumstance under which multiple GPUs can be used to reduce tool processing time. If you configure your system in MultipleDevicesPerTool mode, then all installed GPUs are treated as a single GPU. This means that only one tool can be processed at a time for the entire server.
Note: In comparison with other Tesla cards, the T4 is oriented toward run-time operation. It supports ViDi training and run-time, but training performance will likely be slower than a V100.
In the specific case of a Red Analyze tool, the use of MultipleDevicesPerTool mode may speed up the tool, especially a tool with a high image-to-feature size ratio. However, this potential speed-up comes at the expense of latency across all clients.

System Configuration for Multi-GPU Systems
When configuring a host system for multiple GPUs, keep the following in mind:
- The chassis may need to provide up to 2 kW of power
- Quadro and Tesla cards provide better cooling configuration for multiple-card installations
- Make sure that the PCIe configuration has 16 PCIe lanes available for each GPU
- Do not enable SLI

What About Training Time?
Reducing tool training time does not affect your performance at run time, but it can improve the productivity of your development team.
ViDi training uses a mixture of CPU and GPU resources. When considering training specifically, there are three phases: computing image statistics, building the model, and then processing the image set with the newly trained model. The model building phase of training usually takes the longest, and it is an iterative process. Each iteration requires that the tool generate training data from all of the training images. If the images are in a non-BMP format, they need to be converted to BMP for each iteration.
Tool training is always single-threaded and single-GPU. You cannot make training faster using multiple GPUs. Using multiple GPUs can improve your productivity, because you can train multiple tools concurrently.
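The factors in the "Estimating Run-Time Performance" table can be turned into a quick what-if estimate of tool time on a different card family. A minimal sketch; the assumption that processing time scales inversely with the table's percentage is mine, and the guide presents these factors only as approximations:

```python
# Relative performance factors from the guide's "Estimating Run-Time
# Performance" table (baseline = standard mode on a card without run-time
# Tensor cores). Estimating a tool time from a factor assumes time scales
# inversely with relative performance, which is an approximation.

PERF_FACTOR = {
    ("standard", "none"): 1.00,       # e.g. GTX
    ("standard", "volta"): 1.50,      # e.g. V100
    ("standard", "turing"): 1.50,     # e.g. T4
    ("low_precision", "none"): 1.25,
    ("low_precision", "volta"): 1.25,
    ("low_precision", "turing"): 1.75,
}

def estimated_tool_ms(baseline_ms: float, mode: str, tensor_cores: str) -> float:
    """Estimate tool time from a baseline measured in standard mode, no Tensor cores."""
    return baseline_ms / PERF_FACTOR[(mode, tensor_cores)]

# A 35 ms tool measured on a GTX in standard mode would be expected to take
# about 20 ms in low-precision mode on a T4:
print(round(estimated_tool_ms(35.0, "low_precision", "turing"), 1))  # 20.0
```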

An Adaptive Segmentation Based Multi-mode Inter-frame Coding Method for Video Point Cloud


自适应分割的视频点云多模式帧间编码方法陈 建 1, 2廖燕俊 1王 适 2郑明魁 1, 2苏立超3摘 要 基于视频的点云压缩(Video based point cloud compression, V-PCC)为压缩动态点云提供了高效的解决方案, 但V-PCC 从三维到二维的投影使得三维帧间运动的相关性被破坏, 降低了帧间编码性能. 针对这一问题, 提出一种基于V-PCC 改进的自适应分割的视频点云多模式帧间编码方法, 并依此设计了一种新型动态点云帧间编码框架. 首先, 为实现更精准的块预测, 提出区域自适应分割的块匹配方法以寻找最佳匹配块; 其次, 为进一步提高帧间编码性能, 提出基于联合属性率失真优化(Rate distortion optimization, RDO)的多模式帧间编码方法, 以更好地提高预测精度和降低码率消耗. 实验结果表明, 提出的改进算法相较于V-PCC 实现了−22.57%的BD-BR (Bjontegaard delta bit rate)增益. 该算法特别适用于视频监控和视频会议等帧间变化不大的动态点云场景.关键词 点云压缩, 基于视频的点云压缩, 三维帧间编码, 点云分割, 率失真优化引用格式 陈建, 廖燕俊, 王适, 郑明魁, 苏立超. 自适应分割的视频点云多模式帧间编码方法. 自动化学报, 2023, 49(8):1707−1722DOI 10.16383/j.aas.c220549An Adaptive Segmentation Based Multi-mode Inter-frameCoding Method for Video Point CloudCHEN Jian 1, 2 LIAO Yan-Jun 1 WANG Kuo 2 ZHENG Ming-Kui 1, 2 SU Li-Chao 3Abstract Video based point cloud compression (V-PCC) provides an efficient solution for compressing dynamic point clouds, but the projection of V-PCC from 3D to 2D destroys the correlation of 3D inter-frame motion and re-duces the performance of inter-frame coding. To solve this problem, we proposes an adaptive segmentation based multi-mode inter-frame coding method for video point cloud to improve V-PCC, and designs a new dynamic point cloud inter-frame encoding framework. Firstly, in order to achieve more accurate block prediction, a block match-ing method based on adaptive regional segmentation is proposed to find the best matching block; Secondly, in or-der to further improve the performance of inter coding, a multi-mode inter-frame coding method based on joint at-tribute rate distortion optimization (RDO) is proposed to increase the prediction accuracy and reduce the bit rate consumption. Experimental results show that the improved algorithm proposed in this paper achieves −22.57%Bjontegaard delta bit rate (BD-BR) gain compared with V-PCC. 
The algorithm is especially suitable for dynamic point cloud scenes with little change between frames, such as video surveillance and video conferencing.

Key words Point cloud compression, video-based point cloud compression (V-PCC), 3D inter-frame coding, point cloud segmentation, rate distortion optimization (RDO)

Citation Chen Jian, Liao Yan-Jun, Wang Kuo, Zheng Ming-Kui, Su Li-Chao. An adaptive segmentation based multi-mode inter-frame coding method for video point cloud. Acta Automatica Sinica, 2023, 49(8): 1707−1722

A point cloud is a set of points in 3D space carrying geometry and attribute information, and is usually divided into sparse and dense point clouds according to point density [1]. Dense point clouds, captured by camera matrices or high-precision LiDAR and viewed through a VR headset, can restore an object or scene with six degrees of freedom in 3D space, offering a more realistic visual experience than panoramic video; they are widely used in virtual reality, augmented reality, and 3D object capture [2−3]. Sparse point clouds, obtained by photoelectric processing of reflected LiDAR beams, can generate environment maps for spatial localization and object detection, and have been applied in autonomous driving, drones, intelligent robots, and similar scenarios [4−7].

Manuscript received July 5, 2022; accepted November 29, 2022. Supported by National Natural Science Foundation of China (62001117, 61902071), Fujian Natural Science Foundation (2020J01466), Fujian Science & Technology Innovation Laboratory for Optoelectronic Information of China (2021ZR151), and Ultra-low Latency Video Coding Chip and its Industrialization (2020 Special Project of Fujian Provincial Education Department for Industry-University Research). Recommended by Associate Editor LIU Cheng-Lin.

1. School of Advanced Manufacturing, Fuzhou University, Quanzhou 362251  2. College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116  3. College of Computer and Data Science/College of Software, Fuzhou University, Fuzhou 350116

Acta Automatica Sinica, Vol. 49, No. 8, August 2023
However, compared with 2D images, point clouds consume significantly more bits in storage and transmission [8]. Taking the classic 8i dynamic point cloud dataset [9] as an example, the transmission rate at 30 frames per second reaches 180 MB/s, so dynamic point cloud compression is a prerequisite for efficient transmission and processing.

To compress dynamic point clouds efficiently, some recent work first performs inter-frame motion estimation and compensation in 3D to fully exploit the temporal correlation between frames. Kammerl et al. [10] first proposed encoding inter-frame differences over an octree built for adjacent frames, improving on intra-frame octree coding. Thanou et al. [11] partitioned each point cloud frame with an octree and used spectral graph wavelet transforms to cast 3D inter-frame motion estimation as a feature matching problem between consecutive graphs. However, these methods do not estimate the motion vectors of inter-frame pixels accurately. For more precise motion vector estimation, de Queiroz et al. [12] proposed a motion-compensated dynamic point cloud encoder that voxelizes the point cloud, partitions it into blocks, chooses intra or inter coding mode according to block correlation, and applies a translational motion model to inter-coded blocks to reduce prediction error. Mekuria et al. [13] uniformly partitioned the point cloud into N × N × N blocks and estimated motion for corresponding inter-frame blocks with iterative closest point (ICP) [14] to further improve inter-frame prediction accuracy. Santos et al. [15] used an N-step search (NSS) similar to that of 2D video encoders, iteratively searching 3 × 3 × 3 block regions in 3D for corresponding inter-frame blocks and then registering them for inter-frame coding. However, the block partitions used by these methods break the motion correlation between blocks, and inter-frame compression performance does not improve significantly.

To further improve dynamic point cloud compression, other work projects the 3D point cloud onto 2D planes to form 2D video sequences, reusing the mature motion prediction and compensation of 2D video encoders for 3D inter-frame prediction. Lasserre et al. [16] projected the 3D point cloud to 2D with an octree-based method and applied a 2D video encoder for inter coding; Budagavi et al. [17] sorted the 3D points on a 2D plane to form a video sequence and encoded it with High Efficiency Video Coding (HEVC). These methods break the connections between 3D points during the 3D-to-2D projection, and reconstruction quality is unsatisfactory. To better preserve point relationships after projection, Schwarz et al. [18] mapped points onto a cylinder along their normals and encoded the unrolled cylinder with a 2D video encoder to improve performance; but projection onto the cylinder loses some occluded points, degrading reconstruction accuracy. To retain as many projected points as possible, Mammou et al. [19] partitioned the point cloud into patches according to point normal directions and point-to-point distances, and arranged the patches on a 2D plane to reduce point loss and further improve reconstruction quality.

This patch-projection approach, followed by a 2D video encoder for 2D inter-frame motion prediction and compensation, achieves the best performance and has been adopted by the Moving Picture Experts Group (MPEG) in the ongoing video-based point cloud compression (V-PCC) standard [20]. However, the 3D-to-2D patch projection prevents 3D motion information from being used effectively, limiting inter-frame compression gains. To address this, some work attempts 3D inter-frame prediction on top of V-PCC. Li et al. [21] proposed a 3D-to-2D motion model that derives 2D motion vectors from V-PCC geometry and auxiliary information to improve inter-frame compression, but the 3D motion information derived from 2D is incomplete, so motion estimation is inaccurate. Kim et al. [22] classified frames into intra frames and predicted frames by inter-frame difference: intra frames are encoded with V-PCC, while predicted frames are motion-estimated from the previous frame and their residuals are encoded for motion compensation; but residual coding still consumes many bits. Both methods achieve 3D inter-frame prediction on top of V-PCC, but whether via 2D-derived 3D motion or inter-frame residual coding, the improvement is limited.

In this work, first, to alleviate the loss of motion correlation that block partitioning can cause during 3D motion estimation and compensation, we introduce the KD-tree (K-dimension tree) idea: matched blocks are split iteratively, layer by layer, and a block matching degree function adaptively determines the cutoff depth of the splitting, yielding a more accurate motion block search. In addition, to address V-PCC's inability to exploit 3D motion information after 2D projection, we judge the similarity of matched blocks in 3D using both geometry and color attributes, and design a rate distortion optimization (RDO) model that classifies matched blocks for multi-mode inter-frame coding, further improving inter-frame prediction performance.
Experiments show that, compared with the latest V-PCC test software and methods from the literature, the proposed adaptive-segmentation multi-mode inter-frame coding method achieves negative BD-BR (Bjontegaard delta bit rate) gains. The main contributions of this paper are:

1) A new 3D inter-frame coding framework for dynamic point clouds, which improves inter-frame coding performance through automatic coding mode decision, region-adaptive segmentation, and multi-mode inter-frame coding with joint-attribute rate distortion optimization, combined with V-PCC;

2) A region-adaptive segmentation based block matching method that finds the best matching blocks for inter-frame prediction, alleviating the loss of motion correlation caused by uniform partitioning and traditional segmentation algorithms;

3) A multi-mode inter-frame coding method based on a joint-attribute rate distortion optimization model, which improves prediction accuracy while significantly reducing inter-frame coding bits.

1 Video-based point cloud compression and problem analysis

The proposed algorithm mainly improves 3D inter-frame prediction on top of V-PCC, so this section briefly introduces V-PCC's main techniques and analyzes their shortcomings. The V-PCC encoding framework is shown in Figure 1.

Figure 1: V-PCC encoder diagram

First, V-PCC computes the normal of every point in the 3D point cloud to determine the most suitable projection plane, and segments the point cloud into multiple patches [23]. The patches are then packed compactly on a 2D plane according to their position information. Next, corresponding 2D images are generated from the packing result: a geometry map, an attribute map, and an occupancy map represent each point's coordinates, color, and occupancy, respectively. Since the 2D arrangement of patches inevitably leaves empty pixels, the occupancy map indicates whether each pixel is occupied [24]; since the 3D-to-2D projection loses one coordinate dimension, the geometry map stores this information in depth form; and to enable visualization of the dynamic point cloud, the attribute map stores the color attribute information of the projected points. Finally, to improve the compression performance of the video encoder, empty pixels in the attribute and geometry maps are padded and smoothed to reduce high-frequency components; and to mitigate overlaps or artifacts at patch boundaries in the reconstructed point cloud, geometric and attribute smoothing filters are applied to the reconstruction [25]. The resulting 2D video sequences are then encoded with a 2D video encoder (such as HEVC).

By projecting dynamic point cloud frames to 2D, V-PCC leverages mature 2D video coding technology to improve dynamic point cloud compression. However, the projection splits continuous 3D objects into multiple 2D sub-blocks and loses the 3D motion information, so the temporal redundancy present in dynamic point clouds cannot be exploited effectively. To illustrate this loss intuitively, Figure 2 takes the Longdress dataset as an example and shows the attribute maps obtained by V-PCC projection for the two adjacent frames 1053 and 1054. Some regions that are highly similar in 3D, such as the patches at marked positions 1, 2, and 3, exhibit completely different distributions after 2D projection. This limits the inter-frame prediction of the 2D video encoder and hinders further compression gains.

Figure 2: V-PCC projection from 3D to 2D (attribute maps; 3D views and projected attribute maps of Longdress frames 1053 and 1054, with corresponding regions 1, 2, and 3 marked)

2 Improved 3D inter-frame coding for dynamic point clouds

To further reduce the temporal redundancy of dynamic point clouds on top of V-PCC, performing inter-frame prediction and compensation in 3D to minimize inter-frame error, this paper proposes an improved 3D inter-frame coding framework for dynamic point clouds, shown in Figure 3. Its basic flow is introduced below.

First, at the encoder, the input point cloud sequence passes through module (a) for coding mode decision, dividing frames into intra frames and predicted frames. As in 2D video encoders, the dynamic point cloud is divided into groups of pictures (GOPs) with motion similarity, which are encoded separately. The first frame of each GOP is an intra frame and the subsequent frames are predicted frames; intra frames are encoded directly by V-PCC, while predicted frames are encoded by inter-frame prediction. A reasonable GOP division means that adjacent frames within a group have high motion correlation, which maximizes the matched-block prediction effect, reduces directly coded bits, and improves overall inter-frame coding performance. Inspired by [22], we decide each frame's coding mode by testing the geometric similarity between the current frame and the previous reference point cloud, allowing flexible GOP division, as shown in (1):

mode = 1, if E_{G_{cur,ref}} > Ω;  mode = 0, otherwise.   (1)

where cur is the current frame point cloud, ref is the previous reference point cloud, E_{G_{cur,ref}} denotes the geometric deviation between the two adjacent point cloud frames, and Ω is the coding mode decision threshold.
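As a concrete illustration of this frame-level mode decision, the sketch below thresholds a mean nearest-neighbor distance between consecutive frames against Ω; the point data and threshold are hypothetical, and brute-force matching stands in for a real geometric deviation metric:

```python
import math

def nearest_dist(p, cloud):
    # Distance from point p to its nearest neighbor in cloud
    # (brute force; a KD tree would be used at scale).
    return min(math.dist(p, q) for q in cloud)

def geometric_error(cur, ref):
    # Mean per-point geometric deviation between the two frames.
    return sum(nearest_dist(p, ref) for p in cur) / len(cur)

def coding_mode(cur, ref, omega):
    # 1 -> intra frame (large deviation), 0 -> inter (predicted) frame.
    return 1 if geometric_error(cur, ref) > omega else 0

ref = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
cur = [(0.1, 0, 0), (1.1, 0, 0), (0, 1.1, 0)]
print(coding_mode(cur, ref, omega=0.5))  # small motion -> inter mode (0)
```

A new GOP starts whenever the decision returns 1, so GOP length adapts to the actual motion in the sequence.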
When mode = 1, the current frame differs significantly and should be encoded in intra mode; when mode = 0, the two frames are highly similar and inter-mode coding should be used. In addition, the dynamic point cloud reconstruction error E_{O,R} is computed as the mean geometric and attribute error between the points of the original point cloud O and the reconstructed point cloud R, as shown in (2):

E_{O,R} = (1/N_O) Σ_{i=1}^{N_O} ‖O(i) − R(i′)‖   (2)

where N_O is the number of points in the original point cloud, and O(i) and R(i′) denote the geometry or attribute values of the i-th original point and its corresponding reconstructed point i′; E_{O,R} is the error between the original cloud O and the reconstruction R.

Next, after the inter-frame coding mode decision, module (b) performs region-adaptive block segmentation of the predicted frame. The purpose of segmentation is to find matched blocks with consistent inter-frame motion for motion prediction and compensation. Unlike uniform N × N × N partitioning or K-means clustering, the proposed KD-tree-based region-adaptive block matching judges the inter-frame motion of each segmented block from three aspects — centroid, bounding box, and point count — to adaptively decide the segmentation depth and finally achieve the best matching block search.

Then, for the resulting matched blocks, module (c) performs inter-frame prediction based on joint-attribute rate distortion optimization. In this module, using the joint geometry and color attribute difference of inter-frame blocks together with the rate distortion optimization model, matched blocks are classified into absolute similar blocks (ASB) with almost no difference, relative similar blocks (RSB) with small differences, and non-similar blocks (NSB) with large differences. For an ASB, the inter-frame error is considered negligible, and only the position information of the reference block is recorded. An RSB has some inter-frame error, but the geometric and attribute prediction errors can be reduced through ICP registration and attribute compensation, so prediction and compensation information is recorded in addition to the block position. An NSB is considered not effectively predictable, so NSBs are merged and encoded with the intra encoder.

Finally, after the inter-frame mode classification, to reconstruct the current frame at the encoder as the reference for the next frame's block matching, module (d) applies geometric prediction and attribute compensation to the RSBs, and then fuses the predicted and compensated RSBs with the ASBs and NSBs to obtain the reconstructed frame. For decoder-side reconstruction, all NSBs of the predicted frame are first combined and intra coded by the V-PCC encoder in module (e); in addition, the position information of the ASBs and the position, prediction, and compensation information of the RSBs are entropy coded in module (f) to complete the inter-frame coding flow.

This completes the overview of the framework. Sections 3 and 4 describe the proposed region-adaptive segmentation based block matching algorithm and the joint-attribute rate-distortion-optimized multi-mode inter-frame coding in more detail, and Section 5 evaluates the algorithms experimentally.

3 Region-adaptive segmentation based block matching

Compared with 2D video sequences, dynamic point clouds contain large empty regions, and the number of points usually differs between frames. When estimating inter-frame motion for a point set in a region, accurately finding the matching point set in the adjacent frame is therefore difficult. Suppose the current frame is segmented into N_B sub-clouds for inter-frame prediction, and the j-th sub-cloud cur_j and its corresponding reference matched block ref_j have a combined geometric and attribute error ΔE_{cur_j, ref_j}. Since the reconstructed predicted frame is essentially estimated by combining the corresponding reference matched blocks, accurate inter-frame block matching seeks to minimize the estimation error of every segmented block to improve the overall prediction accuracy of the predicted frame, as shown in (3):

min Σ_{j=1}^{N_B} ΔE_{cur_j, ref_j}   (3)

Figure 3: Improved 3D inter-frame coding framework

To fully exploit inter-frame correlation and reduce temporal redundancy, some work segments the point cloud and then searches for the best matching blocks for inter-frame prediction. Mekuria et al. [13] divided the dynamic point cloud into macroblocks of equal size, judged block similarity from inter-frame point counts and colors, and computed a rigid transformation matrix with ICP for similar blocks to realize inter-frame prediction. However, when corresponding matched blocks deviate substantially, prediction is poor. To reduce matched-block error and improve prediction accuracy, Xu et al. [26] clustered the point cloud with K-means, performed geometric motion prediction via ICP, and estimated attribute motion vectors with a graph Fourier transform based model. But the K-means clustering is performed only on the predicted frame, without considering inter-frame block motion correlation, so the gain in matching accuracy is limited. To further improve matching accuracy, Santos et al. [15], inspired by the N-step search of 2D video encoders, proposed a 3D-NSS method for 3D matched-block search: the cloud is split into N × N × N macroblocks, 3D-NSS searches for the optimal matching block, and ICP then performs inter-frame prediction.

These methods all achieve effective block matching; however, neither uniform macroblock partitioning nor conventional K-means clustering considers the motion continuity that may exist across blocks, making the segmentation inflexible.
Specifically, blocks that are too large cannot guarantee inter-block matching, while blocks that are too small often over-refine predicted blocks that already move coherently, redundantly encoding identical motion prediction information. To avoid these problems, we introduce the KD-tree idea and propose a region-adaptive segmentation algorithm: it iteratively performs binary splits, layer by layer, analyzing the motion characteristics and matching degree of the blocks at each segmentation depth to decide whether splitting should continue, thereby achieving accurate motion block matching. The basic idea is illustrated in Figure 4: if the split condition is satisfied, binary splitting continues; otherwise splitting stops.

The key to avoiding over-segmentation while achieving an accurate motion block search is to accurately judge whether the current region undergoes inter-frame motion while preserving motion continuity. We define a block matching function Ψ(l,n) to determine the cutoff depth, as shown in (4):

Ψ(l,n) = ρ(n) · max[sign(1 − ξ(l)), 0]   (4)

where ρ(n) = max[sign(n − N_D), 0] is the point count test function: when the point count n exceeds the minimum block size threshold N_D, ρ(n) = 1, meaning the minimum point count requirement for deeper splitting is met; otherwise splitting is forcibly stopped. ξ(l) is the block motion offset at the current depth l, which measures the motion change between matched blocks to decide whether further splitting is needed. The block is split further when Ψ(l,n) = 1, and splitting stops when Ψ(l,n) = 0.

The proposed ξ function estimates the motion magnitude between matched blocks by the inter-frame centroid offset ξ_w, tests the geometric motion consistency of matched blocks by the inter-frame bounding box offset ξ_u, and verifies the point distribution density by the point count offset ξ_n; the accumulated ξ_w + ξ_u + ξ_n, relative to the segmentation cutoff threshold T_l, then measures the overall motion magnitude and consistency of the current block. For the current segmentation depth l, this refines to (5):

ξ(l) = (ξ_w + ξ_u + ξ_n) / T_l   (5)

where ξ_w, ξ_u, and ξ_n are computed from w_cur, w_ref, u_cur, u_ref, n_cur, and n_ref — the centroids, bounding boxes, and point counts of the region at the current segmentation depth and of its corresponding region in the previous frame — and P_Max and P_Min are the diagonal corner points of the current block.

Under the premise that ρ(n) = 1, the value of ξ(l) reflects the inter-frame motion of the region at the current KD-tree depth. The larger ξ, the more significant the inter-frame motion: when ξ > 1, the block needs inter-frame motion compensation, and continued splitting would break the block's motion coherence or make effective inter-frame matching impossible, causing prediction failure. The smaller ξ, the smaller the overall motion of the region: when ξ < 1, the region is split further to locate possible moving sub-regions.

For blocks requiring further splitting, to keep the split as even as possible and avoid excessive error between matched blocks after splitting, the mean of the centroids of the matched block pair is taken as the split point, and the split plane is placed along the longest edge of the bounding box to determine the split axis d at depth l + 1,

Figure 4: Schematic diagram of the region-adaptive segmentation based block matching method (splitting continues at depths l = 0, 1, 2, …, m while the condition is satisfied, and stops at depth m + 1 when it is not)
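The stop/split test of (4)–(5) can be sketched as below. The exact offset terms inside ξ are illustrative reconstructions (centroid distance, bounding box corner shifts, normalized point count difference), so treat the details as assumptions:

```python
import math

def centroid(block):
    n = len(block)
    return tuple(sum(p[k] for p in block) / n for k in range(3))

def bbox(block):
    # Diagonal corner points P_Min, P_Max of the block's bounding box.
    lo = tuple(min(p[k] for p in block) for k in range(3))
    hi = tuple(max(p[k] for p in block) for k in range(3))
    return lo, hi

def motion_offset(cur, ref, t_l):
    # xi(l) = (xi_w + xi_u + xi_n) / T_l, with illustrative term definitions.
    xi_w = math.dist(centroid(cur), centroid(ref))            # centroid shift
    (clo, chi), (rlo, rhi) = bbox(cur), bbox(ref)
    xi_u = math.dist(clo, rlo) + math.dist(chi, rhi)          # bbox shift
    xi_n = abs(len(cur) - len(ref)) / max(len(cur), len(ref)) # density shift
    return (xi_w + xi_u + xi_n) / t_l

def should_split(cur, ref, t_l, n_d):
    # Split further only if the block is large enough (rho(n) = 1)
    # and its inter-frame motion is small (xi(l) < 1).
    if len(cur) <= n_d:          # rho(n) = 0: forced stop
        return False
    return motion_offset(cur, ref, t_l) < 1.0

static = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (2, 2, 2)]
moved = [(x + 3, y, z) for x, y, z in static]
print(should_split(static, static, t_l=1.0, n_d=2))  # no motion -> True (split deeper)
print(should_split(moved, static, t_l=1.0, n_d=2))   # large motion -> False (stop)
```

The recursion that calls this test would alternate split axes along the longest bounding box edge, as in the paper's KD-tree scheme.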
其中迭代最近点算法常用于帧间运动估计中,其通过迭代更新待配准点云 相较于目标点云 S t E (S,t )间的旋转矩阵 和平移向量 , 进而实现误差 最小化, 如式(7)所示:N p p i P i q i ′Q p i 其中 为待配准点云点数, 为待配准点云 的第 个点, 为目标点云 中与 相对应的点.但是, 完全依据ICP 配准进行动态点云的三维帧间预测存在两个问题: 首先, ICP 仅在预测块上逼近几何误差的最小化而没考虑到颜色属性偏差引起的匹配块差异, 影响了整体预测精度; 其次, 从率失真角度分析, 对运动变化极小的匹配块进行ICP 配准实现的运动估计是非必要的, 该操作很难改善失真且会增加帧间编码比特消耗.为改善上述问题, 提出了联合属性率失真优化的多模式帧间编码方法. 提出的方法首先在确保几何预测精度的同时, 充分考虑了可能的属性变化导致的预测精度下降问题, 而后通过率失真优化模型,对块依据率失真代价函数得到的最优解进行分类后, 应用不同的编码策略以优化帧间编码方案, 旨在有限的码率约束下最小化编码失真, 即式(8)所示:R j D j j N B R C λ其中, 和 分别表示第 个点云块的编码码率和对应的失真; 是当前帧编码块总数; 表示总码率预算.引入拉格朗日乘子 ,式(8)所示的带约束优化问题可以转换为无约束的最优化问题, 即式(9)所示:当前帧分割可视化当前帧分割效果参考帧分割效果图 5 区域自适应分割的块匹配方法分割示例Fig. 5 Example of block matching method based onadaptive regional segmentation1712自 动 化 学 报49 卷。

VIDEO/AUDIO TRANSMISSION SYSTEM, TRANSMISSION METHOD, TRANSMISSION DEVICE, AND RECEPTION DEVICE


Patent title: VIDEO/AUDIO TRANSMISSION SYSTEM, TRANSMISSION METHOD, TRANSMISSION DEVICE, AND RECEPTION DEVICE
Inventors: MOCHIDA, Yasuhiro; YAMAGUCHI, Takahiro
Application number: EP19838333
Filing date: 2019-07-16
Publication number: EP3826313A4
Publication date: 2022-03-30
Abstract: An object is to provide a video/audio transmission system, a transmission method, a sending device, and a reception device capable of avoiding buffer overflow and buffer depletion in a decoding device and realizing GOP synchronization in encoding devices by eliminating clock deviation among devices. In the video/audio transmission system according to the present invention, all of the sending devices supply clocks generated from common time point information to cameras as genlock signals. All of the reception devices supply clocks generated from the common time point information to the decoding devices as genlock signals. Therefore, clock deviation between the devices can be eliminated, and the buffer overflow and the buffer depletion in the decoding device can be avoided. Frame periods of video signals output by a plurality of dispersed cameras can be aligned, and reliable GOP synchronization can be realized by the encoding devices on a latter stage with respect to the cameras.
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION

Smart-AVI DVN-2P 2-Port Cross-platform DVI-D KVM with USB 2.0 Manual


OTHER
Power
External; 100-240 VAC input, 5 VDC 2 A output (10 W)
Dimensions
10.5”W x 1.875”H x 6”D
Weight
5 lbs.
Approvals
UL, CE, ROHS Compliant
Operating Temp.
32-131°F (0-55 °C)
Front Panel Control
To switch ports using the front panel, press the front button to cycle through the available ports. The selected port number will be indicated on the LED display.

Installation Manual
DVN-2P
2-Port Cross-platform DVI-D KVM with USB 2.0, Stereo Audio, RS-232 and IR Control
Control 2 Computers, PC or Mac on one Display Up To 20 Feet Away
What’s in the Box?
PART NO.    QTY   DESCRIPTION
DVN-2PS     1     DVNET2P, 2X1 DVI-D, USB 2.0, audio switch
PS5VDC2A    1     Power supply
            1     User manual
Technical Specifications
Product - Installation Diagram

Intel Video AI Box Configuration and Deployment Quick Start Guide


Develop and verify edge analytics services for On Prem Intel® Video AI Box using BMRA on the Intel® Core™ processor.

Introduction
The Reference System Architectures (Reference System¹) are forward-looking template solutions for fast automated software provisioning and deployment. This document is a quick start guide to configure and deploy the Intel® Video AI Box underlying software requirements using the Container Bare Metal Reference System Architecture (BMRA) on Intel® Core™ processors with either an Intel® Arc™ Discrete Graphics GPU or an Intel® Iris® Xe Integrated Graphics platform. The Reference System is deployed using the On Prem Intel® Video AI Box Configuration Profile with optimized configuration for edge video analytics workloads in a single box in real time for lightweight edge devices. Video analytics is enabled by OpenVINO™ and a choice of OpenCV or Intel® Deep Learning Streamer (Intel® DL Streamer) as AI-based media analytics frameworks. The platform is accelerated by an Intel® Arc™ Discrete Graphics GPU or Intel® Iris® Xe Integrated Graphics, as shown in Figure 1.

Architecture of On Prem Intel® Video AI Box
Figure 1 shows the architecture diagram of the On Prem Intel® Video AI Box Profile, where the media analytics frameworks OpenCV and Intel® DL Streamer are containerized and work alongside a Video Analytics base library container including OpenVINO™ and media accelerators and drivers.
The provided container suite is used for microservice-based system architectures.

Figure 1: Architecture of Intel® Video AI Box deployment using BMRA on_prem_aibox Profile

¹ In this document, "Reference System" refers to the Network and Edge Reference System Architecture.

Network and Edge Reference System Architectures - On Premises Intel® Video AI Box Quick Start Guide

Hardware BOM
Following is the list of the hardware components that are required for setting up Reference Systems:
- Laptop or server running a UNIX-based distribution
- 1x 11th Gen Intel® Core™ with Intel® Iris® Xe Integrated Graphics; OR
- 1x 12th Gen Intel® Core™ with Intel® Arc™ Discrete Graphics GPU A380; OR
- 1x 13th Gen Intel® Core™ with Intel® Iris® Xe Integrated Graphics
- Intel® Arc™ Discrete Graphics GPU A380 (only on 12th Gen Intel Core)
- Max Performance Turbo BIOS configuration is recommended (refer to Chapter 3.8 of the BMRA User Guide)

Software BOM
Following is the list of the software components that are required for setting up Reference Systems:
- Intel® DL Streamer, GStreamer, OpenCV, FFmpeg
- OpenVINO™
- Intel® Media SDK/Intel® Video Processing Library (Intel® VPL), Intel® Media Driver for VAAPI, Libva
- OpenGL, OpenCL, Level Zero GPU, GPU drivers
- oneAPI Data Analytics Library (oneDAL)
- XPU Manager
- Docker, Docker-compose
- Ubuntu 22.04.2 Desktop (Kernel: 5.19)

For more details on software versions for the On Prem Intel® Video AI Box Profile, refer to Chapter 4 of the BMRA User Guide listed in the Reference Documentation section.

Getting Started
Ansible playbooks are used to install the Bare Metal Reference System Architecture (BMRA), which sets up the infrastructure for an On Prem Intel® Video AI Box.
Figure 2 shows the deployment model for Intel® Video AI Box infrastructure using BMRA. The target device starts with Ubuntu 22.04.2 Desktop only, acting as both Ansible host and target, and it ends with the deployed infrastructure using the on_prem_aibox Reference System profile.

Figure 2: BMRA deployment setup for Intel® Video AI Box

Step 1 - Set Up the System
The Intel® Video AI Box is deployed on a single target host running Ubuntu OS. The deployment is on a localhost bare-metal environment (known as target host) and there is no need for a separate Ansible host for this deployment.

Target Host
Install necessary packages (some might already be installed):
# sudo apt update
# sudo apt install -y python3 python3-pip openssh-client git build-essential
# pip3 install --upgrade pip

Step 2 - Download and Install
Target Host
1. Download the source code from the GitHub repository for the Reference System server:
# git clone https:///intel/container-experience-kits/
# cd container-experience-kits
# git checkout v23.07.1
# git submodule update --init
2. Set up a Python* virtual environment and install dependencies:
# python3 -m venv venv
# source venv/bin/activate
# pip3 install -r requirements.txt
3. Install Ansible dependencies for the Reference System:
# ansible-galaxy install -r collections/requirements.yml

Step 3 – Configure
The On Prem Intel® Video AI Box configuration profile (on_prem_aibox) is used for this deployment.

Target Host
1. Generate the configuration files:
# export PROFILE=on_prem_aibox
# make examples ARCH=core
# cp examples/k8s/${PROFILE}/inventory.ini .
Note: The Intel® Video AI Box is deployed on the target (localhost) so the inventory.ini file does not need updates.
2. Copy group_vars and host_vars directories to the project root directory:
# cp -r examples/k8s/${PROFILE}/group_vars examples/k8s/${PROFILE}/host_vars .
3. Update the host_vars filename with the target machine's hostname:
# mv host_vars/node1.yml host_vars/localhost.yml
4. If the server is behind a proxy, update group_vars/all.yml by updating and uncommenting the lines for http_proxy, https_proxy, and additional_no_proxy:
## Proxy configuration ##
http_proxy: ":port"
https_proxy: ":port"
additional_no_proxy: ",mirror_ip"
5. Apply required patches for Kubespray (even though we do not install Kubernetes, it is needed for compatibility with other Ansible scripts):
# ansible-playbook -i inventory.ini playbooks/k8s/patch_kubespray.yml
6. (Recommended) You can check the dependencies of components enabled in group_vars and host_vars with the package dependency checker:
# ansible-playbook -i inventory.ini playbooks/preflight.yml
7. (Optional) Verify that Ansible can connect to the target server by running the following command and checking the output generated in the all_system_facts.txt file:
# ansible -i inventory.ini -m setup all > all_system_facts.txt

Step 4 – Deploy
Target Host
Now the BMRA on_prem_aibox configuration profile can be deployed on the bare metal system by using the following command:
# ansible-playbook -i inventory.ini -b -K playbooks/on_prem_aibox.yml

Step 5 – Validate
Target Host
1. After the successful deployment of the on_prem_aibox profile, the base container-related Docker files and scripts are generated in the following location:
# ls /opt/intel/base_container/
dockerfile/  # Base container Dockerfiles and build scripts
test/        # Test container Dockerfiles, build scripts, and test scripts
2. You can use the build and test scripts to build and test the base containers. Following is an example to build and test the dlstreamer base container. The test uses Intel® DL Streamer to detect cars in an input video.
# cd /opt/intel/base_container/dockerfile
# ./build_base.sh
# ./build_dlstreamer.sh
# cd /opt/intel/base_container/test
# ./test_dlstreamer.sh
3. On test completion, the results can be checked.
If the test is successful, you see PASSED in the test result file:
# cd ~/nep_validator_data/
# cat test_dlstreamer_result
4. On successful test completion, the output video can be seen marked with rectangle bounding boxes and object labels in the videos directory:
# ls ~/nep_validator_data/videos
output_person-vehicle-bike-detection-2004.mp4

Figure 3: Intel® Video AI Box test results with rectangle bounding boxes and object labels over the videos

Additional feature verification tests for the on_prem_aibox configuration profile can be found here:
https:///intel/container-experience-kits/tree/master/validation/verification-manual/base_container/

Reference Documentation
The Network and Edge Bare Metal Reference System Architecture User Guide provides information and a full set of installation instructions for a BMRA.
The Network and Edge Reference System Architectures Portfolio User Manual provides additional information for the Reference Architectures, including a complete list of reference documents.
Other collaterals, including technical guides and solution briefs that explain in detail the technologies enabled in the Reference Architectures, are available in the following location: Network & Edge Platform Experience Kits.

Document Revision History
REVISION   DATE             DESCRIPTION
001        September 2023   Initial release.

No product or component can be absolutely secure. Intel technologies may require enabled hardware, software, or service activation. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others. 0923/DN/WIT/PDF 788715-001US

116. Understanding the Three MIPI-DSI Video Modes


The D-PHY physical layer supports two operating modes, HS (High Speed) and LP (Low Power).
HS mode: low-voltage differential signaling; higher power consumption; high data rate (80 Mbps - 1 Gbps); signal amplitude 100 mV - 300 mV.
LP mode: single-ended signaling; low power consumption; low data rate (< 10 Mbps); signal amplitude 0 - 1.2 V.
In high-speed mode, the lane state is a differential 0 or 1: it is defined as 1 when P is higher than N and as 0 when P is lower than N, with a typical differential voltage of 200 mV on the lines. In LP mode, only lane 0 is used, carrying data and clock bidirectionally.

The link layer has two modes: Command mode and Video mode.

When the link layer is in Command mode, the physical layer can be in either HS or LP mode; when the link layer is in Video mode, the physical layer can only be in HS mode.

In Video mode, the physical layer must use HS mode; below is a long packet carrying image pixel data in video mode.

Unlike command mode, the Data Type here is 3Eh; a Data Type table is given below.

When actually transmitting these packets, certain timing constraints must be observed.

Video mode has three sub-modes:

1. Non-burst mode with sync pulses: in this mode, DSI synchronizes data based on various sync packets.

These packets include reconstruction, timing alignment, and so on.

For more details, refer to the DSI specification.

2. Non-burst mode with sync events: this mode is similar to the first, but instead of sending reconstruction and timing alignment packets, only a single packet type called a "sync event" is sent.

3. Burst mode: with the same horizontal timing, DSI raises the link speed to the maximum the panel supports.

In this mode, the time spent sending RGB data packets is compressed, leaving more time to transmit other data.

To enable Video mode, the host sends various packets to the panel to set the start and end porches. The following packets are used in Video mode:
• VSS: DSI Sync Event Packet: V Sync Start
• VSE: DSI Sync Event Packet: V Sync End
• BLLP: DSI Packet: Arbitrary sequence of non-restricted DSI packets or Low Power Mode including optional BTA
• HSS: DSI Sync Event Packet: H Sync Start
• HSA: DSI Blanking Packet: Horizontal Sync Active or Low Power Mode, No Data
• HSE: DSI Sync Event Packet: H Sync End
• HFP: DSI Blanking Packet: Horizontal Front Porch or Low Power Mode
• HBP: DSI Blanking Packet: Horizontal Back Porch or Low Power Mode
• RGB: DSI Packet: Arbitrary sequence of pixel stream and Null Packets
• LPM: Low Power Mode including optional BTA
In the figure above, shapes with rounded tops represent data packets, and rectangles represent timing states.

Summary of ADI Video Solutions


ADI offers a range of solutions for high-definition television (HDTV) applications, providing the following products to meet customers' design needs:

I. Analog Front End — AD9983 (8-bit)/AD9984 (10-bit)
The AD9983/4 is a high-speed ADC; both RGB and YPbPr signals can be digitized through the AD9983/4.

RGB is supported up to UXGA (1600 × 1200 @ 60 Hz), and YPbPr up to 1080p.

The input front end provides a high-speed 2:1 MUX.

Developers can therefore switch input signals without an external 2:1 MUX.

Built-in Macrovision detection and filtering lets software developers handle Macrovision-protected video signals more conveniently and quickly.

Automated gain/offset adjustment (automatic white balance correction) effectively solves the color cast problem of traditional ADCs and shortens the time needed for production-line calibration.

II. HDMI Rx + Analog Front End — AD9388
The AD9388 is ADI's first product supporting HDMI 1.3. Its built-in 2:1 MUX accepts two HDMI inputs simultaneously, and the auto HDMI EQ in front of the HDMI 2:1 MUX automatically detects signal strength/quality for optimal amplification, supporting HDMI cables up to 50 meters and maximizing the HDMI compatibility of customers' products.

The ADC section contains three high-speed 10-bit ADCs; a MUX in front of the ADCs accepts three sets of RGB or YPbPr video signals simultaneously.

III. Digital video (HDMI) output — AD9389
The AD9389 is a 165 MHz high-speed HDMI transmitter.

It can transmit up to 1080p @ 60 Hz and 1600 × 1200 @ 60 Hz.

It operates from a single 1.8 V supply.

A built-in color space converter converts any input color format (RGB/YUV) to any output color format (RGB/YUV).

It accepts 8-channel I2S and SPDIF digital audio inputs.

Its operating temperature meets the industrial range of -40 to +85 °C.

The HDMI transmitter complies with the HDMI V1.1/HDCP V1.1/DVI V1.0 specifications.

DVR-E04G-D 4-ch 1080p Lite 1U H.265 eSSD DVR Manual


Key Features
● Powered by eSSD technology
● Deep learning-based human and vehicle target classification for Motion Detection 2.0
● Scene-adaptive bitrate control video compression
● Up to 1080p Lite@30 fps encoding capability
● Low power consumption
● Audio via coaxial cable

Built-In SSD
● Minimalist eSSD design brings low power consumption
● Integrated storage design provides plug-and-play device use

Easy Installation
● Mini-size design saves installation space
● Anti-vibration design and light weight suit more installation conditions

Compression and Recording
● Scene-adaptive bitrate control technology precisely allocates video storage in the eSSD according to the actual scene
● Full-channel recording at up to 2 MP Lite resolution

Smart Function
● Deep learning-based Motion Detection 2.0
● Smart search for efficient playback

Specification
Motion Detection 2.0
Human/Vehicle Analysis: Deep learning-based Motion Detection 2.0 is enabled by default for all analog channels; it can classify human and vehicle targets and greatly reduce false alarms caused by objects like leaves and lights. Quick search by object or event type is supported.

Recording
Video Compression: H.265 Pro/H.265
Encoding Resolution: 1080p Lite/720p/720p Lite/WD1/4CIF/CIF/QVGA
Frame Rate:
Main stream: 1080p Lite/720p/720p Lite/WD1/4CIF@25 fps (P)/30 fps (N). Note: 1080p Lite/720p is 15 fps by default.
Sub-stream: 4CIF@15 fps; CIF/QVGA@25 fps (P)/30 fps (N)
Video Bitrate: Scene-adaptive bitrate control technology automatically allocates an appropriate bitrate according to the actual scene to ensure a stable recording period.
Stream Type: Video, Video & Audio
Note: In order to protect privacy, the device will only record video (without audio) by default. If you want to record audio, you shall manually enable audio recording.
For operation details, refer to the user manual.
Audio Compression: G.711u
Audio Bitrate: 64 Kbps

Video and Audio
IP Video Input: 1-ch, up to 1080p resolution, supports H.265+/H.265 IP cameras
Analog Video Input: 4-ch, BNC interface (1.0 Vp-p, 75 Ω), supporting coaxitron connection
HDTVI Input: 1080p25, 1080p30, 720p25, 720p30
AHD Input: 1080p25, 1080p30, 720p25, 720p30
HDCVI Input: 1080p25, 1080p30, 720p25, 720p30
CVBS Input: Support
HDMI Output: 1-ch, 1920 × 1080/60 Hz, 1280 × 1024/60 Hz, 1280 × 720/60 Hz
VGA Output: 1-ch, 1920 × 1080/60 Hz, 1280 × 1024/60 Hz, 1280 × 720/60 Hz
Video Output Mode: HDMI/VGA simultaneous output
Audio Input: 4-ch via coaxial cable
Audio Output: 1-ch via HDMI
Synchronous Playback: 4-ch

Network
Total Bandwidth: 64 Mbps; incoming bandwidth: 4 Mbps
Remote Connection: 32
Network Protocol: TCP/IP, DHCP, HiLookVision, DNS, DDNS, NTP, SADP, SMTP, NFS, UPnP™, HTTPS
Network Interface: 1, RJ45 10/100 Mbps self-adaptive Ethernet interface

Auxiliary Interface
Capacity: Built-in 512 GB eSSD (480 GB available)
Note: The device can store about 4 weeks of video for 4-ch analog cameras (default parameters); this estimation is for reference only.
USB Interface: Rear panel: 2 × USB 2.0
Alarm In/Out: N/A

General
Power Supply: 12 VDC, 1 A
Consumption: ≤ 5.4 W (with eSSD)
Working Temperature: -10 °C to 45 °C (14 °F to 113 °F)
Working Humidity: 10% to 90%
Dimensions (W × D × H): 151 × 121 × 46 mm (5.94 × 4.76 × 1.81 inch)
Weight: ≤ 0.5 kg (with eSSD, 1.1 lb.)

Note: Analog and IP signal input types are not configurable, and analog channels cannot be disabled to increase the maximum accessible number of network cameras.

Physical Interface
No.  Description                  No.  Description
1    Video and coaxial audio in   4    LAN network interface and USB interface
2    HDMI interface               5    Power supply
3    VGA interface                6    GND

Available Model: DVR-E04G-D
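The "about 4 weeks" storage note can be sanity-checked from the available capacity and a working bitrate. A sketch with an assumed average per-channel bitrate (the actual scene-adaptive rate varies with content):

```python
def recording_days(capacity_gb, channels, avg_kbps):
    # Days of continuous recording: available capacity divided by
    # the aggregate write rate of all channels.
    bytes_per_day = channels * avg_kbps * 1000 / 8 * 86400
    return capacity_gb * 1e9 / bytes_per_day

# 480 GB available, 4 analog channels, assumed ~400 kbps average per channel
print(round(recording_days(480, 4, 400), 1))  # 27.8 days, roughly 4 weeks
```

With the assumed 400 kbps average, the estimate lands close to the quoted 4-week figure; a higher working bitrate shortens the retention proportionally.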

Christie Spyder X20 Video Processor Guide


CHRISTIE SPYDER X20
Fast and flexible video processing and matrix switching

Auditoriums · Boardrooms · Broadcast studios · Conference rooms · Control rooms · Houses of worship · Media centers · Post-production · Rental and staging · Training rooms

The Christie® Spyder X20 is a versatile hardware-based video processor combined with the flexibility of a universal routing switcher. Its integrated source monitoring enables simultaneous, real-time, full frame rate monitoring of all inputs.

The Spyder X20 provides users with a 20 megapixel bandwidth to blend, window, mix and scale any source format and then routes the signal to any destination device or combination of display devices – quickly and easily. It is easy to deploy and install because of its advanced architecture, and it reduces the amount of wires, boxes and rack space traditionally required because everything is in one unit.

Unrestricted multi-window processing
The Christie® Spyder X20 offers a unique architecture that allows for a resolution and video-format-independent environment. Users are no longer restricted to the resolution of a single computer or video source, or a single display destination. Multiple displays can be combined to generate an enhanced resolution that exceeds what any single display can support.

Ideal for live event and broadcast environments, its 20 megapixel bandwidth enables the Spyder X20 to drive multiple displays to achieve higher brightness, image quality and resolution.
The Spyder X20 can be used in many different environments and with any combination of display devices.

Key features
› 20 megapixel bandwidth
› Internal matrix switching
› Universal input/output capabilities – mix and match multiple formats with one piece of equipment
› Input capability – either 8 or 16 inputs (depending on model) that can be a mix of analog BNC and DVI signals
› Output capability – 8 outputs that natively support any display from component analog 480i to digital 4K
› Built-in conversion for analog/digital, interlaced/progressive, resolution, aspect ratio and refresh rate
› 2D and 3D capabilities
› Manages and displays multiple 3D sources
› Define properties for each output independent of each signal
› Integrated source monitoring – real-time and full frame-rate view of all sources connected to the Spyder X20 (either 16 or 8 inputs) on a single output, tiled into either a 4x4 array (X20-1608) or a 4x2 array (X20-0808)
› Single point of control for all processing and signal distribution functions from front panel, PC via Ethernet, or external control system
› 10-bit processing
› Small form factor – (LxWxH): 21.9 x 17.3 x 7.0" (556 x 439 x 178mm); additionally, only one piece of equipment is required, so the overall space used in a rack is reduced
› Each output individually supports rotation – enabling the creation of vertically-oriented displays
› User-definable edge blending and tiling
› Create any kind of window border or drop shadow with adjustable color, width, softness, shadow offset and transparency
› Online editing mode allows preset displays to be built and edited in preview mode without affecting what the audience is seeing

Additional features
› Built-in image Still Store functionality
› Built-in VESA calculator for custom resolution outputs
› Intuitive graphical user interface (GUI)
› Simple cohesive control of all functions
› Redundant hot swappable power supplies
› Optional stereoscopic support
› Advanced auto-sync functionality
› Bitmap borders
› Window titling
› Optional HDCP support

This generation of Spyder
The Spyder X20 is designed for users in any environment to take images from unique sources, use a variety of display systems and present the images as intended. It is ideal for applications such as live events, broadcast, high-end boardrooms, command and control, houses of worship and education – any installation that has multi-windowing, multiple displays and processing requirements. The Spyder X20 also offers the flexibility to display 2D and 3D content simultaneously in the same display.

Remote PC functionality allows the support of multiple remote servers to match certain security requirements and classification levels.

Software interface
The Microsoft® Windows® based control software provides full set-up, configuration, and real-time control with an easy-to-use interface. Vista Advanced is a Windows-based software interface that makes it easy to configure and control the Spyder X20.

Reduced rack space
Bitmap borders

For the most current specification information, please visit 
Copyright 2022 Christie Digital Systems USA, Inc. All rights reserved. All brand names and product names are trademarks, registered trademarks or tradenames of their respective holders. Performance specifications are typical. Due to constant research, specifications are subject to change without notice. CHRI4546_SpyderX20_Brochure_DEC_21_EN

Hikvision 深度学习基于模式检测的 CCTV 系统说明书

Hikvision 深度学习基于模式检测的 CCTV 系统说明书

Key Feature● Deep learning-based motion detection 2.0 for all analogchannels● Deep learning-based face picture comparison ● Deep learning-based perimeter protection● H.265 Pro+/H.265 Pro/H.265/H.264+/H.264 video compression ● HDTVI/AHD/CVI/CVBS/IP video inputs ● Audio via coaxial cable● Up to 8/16-ch IP camera inputs (up to 8 MP) ● Up to 10 TB capacity per HDDCompression and Recording● H.265 Pro+ can be enabled to improve encoding efficiency and reduce data storage costs ● Recording at up to 8 MP resolutionStorage and Playback● Smart search for efficient playbackSmart Function● Deep learning-based motion detection, perimeter protection (line crossing and intrusion detection), or facial recognition ● Support multiple VCA (Video Content Analytics) events for both analog and smart IP camerasNetwork & Ethernet Access● Compatible with major Wi-Fi dongle products in the market● Hik-Connect & DDNS (Dynamic Domain Name System) for easy network management ● Output bandwidth limit configurableSpecificationModel iDS-7204HUHI-M1/FA iDS-7208HUHI-M1/FA Face Picture Comparison and Search¹Face picture comparison Face picture comparison, face picture search (1-ch face picture comparison alarm for HD analog camera)Face picture library Up to 16 face picture libraries, with up to 500 face pictures in total (each picture ≤ 1 MB, total capacity ≤ 80 MB)Motion Detection 2.0¹Human/Vehicle Analysis Deep learning-based motion detection 2.0 is enabled by default for all analog channels, it can classify human and vehicle, and extremely reduce false alarms caused by objects like leaves and lights;Quick search by object or event type is supported;Perimeter Protection¹Human/VehicleAnalysisUp to 4-chRecordingVideo compression H.265 Pro+/H.265 Pro/H.265/H.264+/H.264Encoding resolution Main stream:8 MP@8 fps/5 MP@12 fps/3K@12 fps/4MP@15 fps/8 MP Lite@15 fps/3 MP@18fps1080p/720p/WD1/4CIF/VGA/CIF@25 fps(P)/30 fps (N)*: 8 MP@8 fps is only available forchannel 1, 8 MP Lite is only available forchannel 
2 to channel 4.Main stream:8 MP@8 fps/5 MP@12 fps/3K@12 fps /4MP@15 fps/3 MP@18 fps1080p/720p/WD1/4CIF/VGA/CIF@25 fps(P)/30 fps (N)Sub-stream:WD1/4CIF/CIF@25 fps (P)/30 fps (N)Video bitrate32 Kbps to 10 Mbps Dual stream SupportStream type Video, Video & Audio Audio compression G.711uAudio bitrate64 KbpsVideo and AudioIP video input22-ch (up to 6-ch)Enhanced IP mode on:4-ch (up to 8-ch), each up to 4 Mbps4-ch (up to 12-ch)Enhanced IP mode on:8-ch (up to 16-ch), each up to 4 Mbps Up to 8 MP resolutionSupport H.265+/H.265/H.264+/H.264 IP camerasAnalog video input 4-ch8-chBNC interface (1.0 Vp-p, 75 Ω), supporting coaxitron connectionHDTVI input 8 MP@15 fps, 5 MP@20 fps, 3K@20 fps, 4MP@30 fps, 4 MP@25 fps, 3 MP@18 fps,1080p@30 fps, 1080p@25 fps, 720p@608 MP@15 fps, 5 MP@20 fps, 3K@20 fps, 4MP@30 fps, 4 MP@25 fps, 3 MP@18 fps,1080p@30 fps, 1080p@25 fps, 720p@60fps, 720p@50 fps, 720p@30 fps, 720p@25 fps fps, 720p@50 fps, 720p@30 fps, 720p@25 fpsAHD input 5 MP, 4 MP, 1080p@25 fps, 1080p@30 fps, 720p@25 fps, 720p@30 fps HDCVI input 4 MP, 1080p@25 fps, 1080p@30 fps, 720p@25 fps, 720p@30 fps CVBS input PAL/NTSCCVBS output 1-ch, BNC (1.0 Vp-p, 75 Ω),resolution: PAL: 704 × 576, NTSC: 704 × 480VGA output 1-ch, 1920 × 1080/60Hz,1280 × 1024/60Hz,1280 × 720/60Hz,HDMI/VGA simultaneous output1-ch, 1920 × 1080/60Hz,1280 × 1024/60Hz,1280 × 720/60Hz,HDMI/VGA simultaneous outputHDMI output 1-ch, 2K (2560 × 1440)/60Hz,1920 × 1080/60Hz,1280 × 1024/60Hz,1280 × 720/60Hz,1024 × 768/60HzHDMI/VGA simultaneous output1-ch, 4K (3840 × 2160)/30Hz,2K (2560 × 1440)/60Hz,1920 × 1080/60Hz,1280 × 1024/60Hz,1280 × 720/60Hz,1024 × 768/60HzHDMI/VGA simultaneous outputAudio input31-ch (up to 4-ch is optional), RCA (2.0 Vp-p, 1 KΩ)1-ch (up to 8-ch is optional), RCA (2.0 Vp-p,1 KΩ)4-ch via coaxial cable8-ch via coaxial cableAudio output1-ch, RCA (Linear, 1 KΩ)Two-way audio1-ch, RCA (2.0 Vp-p, 1 KΩ) (using the first audio input) Synchronous playback4-ch8-ch NetworkRemote connections3264Network protocols 
TCP/IP, PPPoE, DHCP, Hik-Connect, DNS, DDNS, NTP, SADP, NFS, iSCSI, UPnP™, HTTPS, ONVIFNetwork interface 1, RJ45 10/100 Mbps self-adaptiveEthernet interface1, RJ45 10/100/1000 Mbps self-adaptiveEthernet interfaceWi-Fi Connectable to Wi-Fi network by Wi-Fi dongle through USB interface Auxiliary interfaceSATA 1 SATA interfaceCapacity Up to 10 TB capacity for each diskSerial interface RS-485 (half-duplex)USB interface Front panel: 1 × USB 2.0Rear panel: 1 × USB 2.0Front panel: 1 × USB 2.0Rear panel: 1 × USB 3.0Alarm in/out3N/A (optional to support)GeneralPower supply12 VDC, 1.5 A12 VDC, 2 A Consumption (withoutHDD)≤ 10 W≤ 15 W Working temperature-10 °C to +55 °C (+14 °F to +131 °F)Working humidity10% to 90%Dimension (W × D × H)315 × 242 × 45 mm (12.4 × 9.5 × 1.8 inch)Weight (without HDD)≤ 1.16 kg (2.6 lb)≤ 2 kg (4.4 lb)Note:1: Face picture comparison, motion detection 2.0 and perimeter protection cannot be enabled at the same time. Enable one function will make the other two unavailable.2: Enhanced IP mode might be conflicted with smart events (face picture comparison, motion detection 2.0 and perimeter protection) or other functions, please refer to the user manual for details.3: The number of audio inputs and alarm in/out can be optional. The tag on the package will describe audio input and alarm in/out parameters. For example, “4A+8/4ALM” means your device has 4 audio inputs, 8 alarm inputs and 4 alarm outputs. If your device only has 1 audio input, the tag may not describe it.*: The rear panel of iDS-7204HUHI-M1/FA provides 4 video input interfaces. Physical InterfaceNo. DescriptionNo. Description1 VIDEO IN 7 AUDIO IN, RCA connector2 USB interface 8 LAN network interface3 VIDEO OUT 9 RS-485 serial interface4 HDMI interface 10 12 VDC power input5 VGA interface11 GND 6AUDIO OUT, RCA connectorAvailable ModeliDS-7204HUHI-M1/FA, iDS-7208HUHI-M1/FA04261302010517。

AXIS M2035-LE 小型高清摄像头说明书

AXIS M2035-LE 小型高清摄像头说明书

DatasheetAXIS M2035-LE Bullet CameraAffordable camera with deep learningIdeal for tough environments and rough weather,AXIS M2035-LE delivers HDTV in1080p.With a deep learning process-ing unit,it enables unique opportunities for analytics based on deep learning on the edge.AXIS Object Analytics offers detection and classification of humans,vehicles,and types of vehicles.And Axis Edge Vault protects your Axis device ID and simplifies authorization of Axis devices on your network.Available in two lens alternatives,this IK08-rated camera offers flexible and cost-efficient installation including PoE.Edge-to-edge technology allows for smart pairing with Axis speakers.Plus,a spacious,sealed back box ensures secure cable management.>HDTV1080p>Compact,lightweight design>Analytics with deep learning>Zipstream supporting H.264/H.265>Outdoor-ready with IR illuminationAXIS M2035-LE Bullet Camera Models AXIS M2035-LEAXIS M2035-LE BlackAXIS M2035-LE8mmAXIS M2035-LE8mm BlackCameraImage sensor1/2.9”progressive scan RGB CMOSLens Fixed iris,fixed focus,IR correctedAXIS M2035-LE:3.2mm,F1.4Horizontal field of view:101°Vertical field of view:54°Minimum focus distance:1.2mAXIS M2035-LE8mm:7.5mm,F1.6Horizontal field of view:39°Vertical field of view:22°Minimum focus distance:3mDay and night Automatically removable infrared-cut filterMinimum illumination With LightfinderAXIS M2035-LE:Color:0.16lux at50IRE,F1.4 B/W:0.03lux at50IRE,F1.4 0lux with IR illumination on AXIS M2035-LE8mm: Color:0.17lux at50IRE,F1.6 B/W:0.03lux at50IRE,F1.6 0lux with IR illumination onShutter speed1/19000s to1/5sSystem on chip(SoC)Model CV25Memory1024MB RAM,512MB Flash ComputecapabilitiesDeep learning processing unit(DLPU) VideoVideo compression H.264(MPEG-4Part10/AVC)Main and High Profiles H.265(MPEG-H Part2/HEVC)Motion JPEGResolution1280x960to320x240(4:3)1920x1080to640x360(16:9)Frame rate Up to25/30fps with power line frequency50/60Hz in H.264and H.265aVideo streaming Multiple,individually configurable streams 
in H.264,H.265and Motion JPEGAxis Zipstream technology in H.264and H.265Controllable frame rate and bandwidthVBR/ABR/MBR H.264/H.265Multi-viewstreamingUp to2individually cropped out view areas in full frame rateImage settings Compression,color,brightness,sharpness,contrast,whitebalance,exposure control,motion-adaptive exposure,WDR:upto115dB depending on scene,text and image overlay,privacymasks,mirroring of imagesRotation:0°,90°,180°,270°,including Corridor FormatPan/Tilt/Zoom Digital PTZAudioAudio output Audio features through portcast technology:two-way audioconnectivity,voice enhancerSmart pairing with Axis speakers via edge-to-edge technology NetworkSecurity IP address filtering,HTTPS b encryption,IEEE802.1x(EAP-TLS)b network access control,user access log,centralized certificatemanagementNetwork protocols IPv4,IPv6USGv6,HTTP,HTTPS b,HTTP/2,TLS b,QoS Layer3DiffServ,FTP,SFTP,CIFS/SMB,SMTP,Bonjour,UPnP®,SNMPv1/v2c/v3(MIB-II),DNS,NTP,NTS,RTSP,RTP,SRTP/RTSPS,TCP,UDP,IGMPv1/v2/v3,RTCP,DHCPv4/v6,SSH,LLDP,MQTT v3.1.1System integrationApplicationProgrammingInterfaceOpen API for software integration,including VAPIX®andAXIS Camera Application Platform;specifications at One-click cloud connectionONVIF®Profile G,ONVIF®Profile M,ONVIF®Profile S,andONVIF®Profile T,specification at Event conditions I/O:manual triggerDevice status:above operating temperature,above or belowoperating temperature,below operating temperature,withinoperating temperature,IP address removed,network lost,newIP address,system readyVideo:average bitrate degradation,tampering,day-night modeApplication:motion alarm,VMD4,VMD3Scheduled and recurring:scheduled eventEdge storage:recording ongoing,storage disruption,storagehealth issues detectedMQTT subscribeEvent actions Record video:SD card and network shareUpload of images or video clips:FTP,SFTP,HTTP,HTTPS,networkshare and emailPre-and post-alarm video or image buffering for recording oruploadNotification:email,HTTP,HTTPS,TCP and SNMP trapOverlay 
text,day/night modeMQTT publishBuilt-ininstallation aidsPixel counterLevel gridAnalyticsAXIS ObjectAnalyticsObject classes:humans,vehicles(types:cars,buses,trucks,bikes)Features:line crossing,object in area,crossline counting BETA,occupancy in area BETA,time in area BETAUp to10scenariosMetadata visualized with color-coded bounding boxesPolygon include/exclude areasPerspective configurationONVIF Motion Alarm eventMetadata Object data:Classes:humans,faces,vehicles(types:cars,buses,trucks,bikes),license platesConfidence,positionEvent data:Producer reference,scenarios,trigger conditionsApplications IncludedAXIS Object Analytics,AXIS Video Motion Detection,activetampering alarmSupport for AXIS Camera Application Platform enablinginstallation of third-party applications,see /acapCybersecurityEdge security Software:Signed firmware,brute force delay protection,digestauthentication,password protection,AES-XTS-Plain64256bitSD card encryptionHardware:Axis Edge Vault cybersecurity platformSecure element(CC EAL6+),system-on-chip security(TEE),Axisdevice ID,secure keystore,signed video,secure boot,encryptedfilesystem(AES-XTS-Plain64256bit)Network security IEEE802.1X(EAP-TLS)b,IEEE802.1AR,HTTPS/HSTS b,TLSv1.2/v1.3b,Network Time Security(NTS),X.509Certificate PKI,IP address filteringDocumentation AXIS OS Hardening GuideAxis Vulnerability Management PolicyAxis Security Development ModelAXIS OS Software Bill of Material(SBOM)To download documents,go to /support/cybersecu-rity/resourcesTo read more about Axis cybersecurity support,go to/cybersecurityGeneralCasing IP66-/IP67-,NEMA4X-and IK08-ratedAluminum and plastic casingAXIS M2035-LE:White NCS S1002-BAXIS M2035-LE Black:Black NCS S9000-NSustainability PVC free,BFR/CFR freePower Power over Ethernet(PoE)IEEE802.3af/802.3at Type1Class3Typical5W,max12.95WConnectors RJ4510BASE-T/100BASE-TX PoEIR illumination OptimizedIR with power-efficient,long-life855nm IR LEDsRange of reach20m(65.6ft)or more depending on the scene Storage Support for 
microSD/microSDHC/microSDXC cardSupport for SD card encryption(AES-XTS-Plain64256bit)Recording to network-attached storage(NAS)For SD card and NAS recommendations see Operating conditions -30°C to50°C(-22°F to122°F)Start-up temperature:-30°C(-22°F)Maximum temperature according to NEMA TS2(2.2.7):74°C (165°F)Humidity:10–100%RH(condensing)Storage conditions -40°C to65°C(-40°F to149°F) Humidity5–95%RH(non-condensing)Approvals EMCCISPR24,CISPR35,EN55032Class A,EN55035,EN61000-6-1,EN61000-6-2,FCC Part15Subpart B Class A,ICES-3(A)/NMB-3(A),KC KN32Class A,KC KN35,RCM AS/NZS CISPR32Class A,VCCI Class ASafetyIEC/EN/UL62368-1,CAN/CSA C22.2No.62368-1,IEC/EN/UL60950-22,CAN/CSA-C22.2No.60950-22,IEC62471,IS13252EnvironmentIEC60068-2-1,IEC60068-2-2,IEC60068-2-6,IEC60068-2-14,IEC60068-2-27,IEC60068-2-78,IEC/EN60529IP66/IP67,IEC/EN62262IK08,NEMA250Type4X,NEMA TS2(2.2.7-2.2.9)NetworkNIST SP500-267Dimensions Length:170mm(6.7in)ø101mm(4.0in)Weight491g(1.1lb)IncludedaccessoriesInstallation guide,Windows®decoder1-user license,Torx®L-key,connector guardOptionalaccessoriesAXIS T94B03L Recessed MountAXIS T94B02D Pendant KitAXIS T94B01P Conduit Back BoxAXIS T94B02M J-Box/Gang Box PlateAXIS T94mounts for various installationsAXIS Surveillance CardsFor more accessories,see VideomanagementsoftwareAXIS Companion,AXIS Camera Station,video managementsoftware from Axis Application Development Partners.For moreinformation,see /vmsLanguages English,German,French,Spanish,Italian,Russian,SimplifiedChinese,Japanese,Korean,Portuguese,Polish,TraditionalChinese,Dutch,Czech,Swedish,Finnish,Turkish,Thai,VietnameseWarranty5-year warranty,see /warrantya.Reduced frame rate in Motion JPEGb.This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit.(),and cryptographic software written by Eric Young (*****************).©2021-2023Axis Communications AB.AXIS COMMUNICATIONS,AXIS,ARTPEC and VAPIX are registered trademarks ofAxis AB in various jurisdictions.All other trademarks are 
the property of their respective owners.We reserve the right tointroduce modifications without notice.T10172619/EN/M12.2/2307。

dpm adaptive原理

dpm adaptive原理

dpm adaptive原理DPM Adaptive原理概述•DPM Adaptive是指动态传输模式自适应技术(Dynamic Play Mode Adaptive),用于在线视频播放中的自适应码率选择。

•在视频传输过程中,根据网络状况实时调整播放的码率,以提供更好的用户体验。

原理详解1. 动态码率调整•DPM Adaptive通过实时监测网络带宽和延迟情况,动态调整视频的码率。

•当网络带宽较低或延迟较高时,会自动降低码率以适应网络状况;当网络带宽较高或延迟较低时,会自动提高码率以提高观看质量。

2. 媒体文件分片•在DPM Adaptive中,视频文件会被分成多个小片段进行传输。

•每个小片段的时长通常在几秒到十几秒之间,这样可以更精细地控制视频码率的调整。

3. 播放器与服务器的交互•在视频播放过程中,播放器会与服务器进行交互,获取不同码率的小片段。

•播放器会根据当前网络状况选择合适的码率片段进行播放,以实现码率自适应。

•当网络状况改变时,播放器会再次向服务器请求适合当前状况的码率片段,保证播放的稳定性和流畅度。

4. 码率选择算法•DPM Adaptive中常用的码率选择算法有BOLA(Bitrate-Optimization Based on Ladder-shaped Association)和ABR(Adaptive Bitrate)等。

•这些算法根据网络条件和视频质量评估,选择合适的码率片段进行播放。

•BOLA算法在选择码率时综合考虑了当前缓冲区的状态、带宽预测和视频质量;ABR算法则主要根据带宽和缓冲区来动态调整码率。

应用场景1. 在线视频服务•DPM Adaptive广泛应用于各大视频平台和流媒体服务,如YouTube、Netflix等。

•用户通过在线观看视频时,可以获得更好的观影体验,避免了视频卡顿、加载过慢等问题。

2. 直播•在直播场景中,DPM Adaptive也发挥重要作用。

ASUS ProArt Display PA32UCX Monitor说明书

ASUS ProArt Display PA32UCX Monitor说明书

Key Features4 times the resolution of full HD 1080p with HDR for stunning details and image quality IPS technology is optimized for the finest image quality with 178°wide-viewing-angle International color standard 100% sRGB / 100% Rec. 709 color space for digital images and video production Factory pre-calibrated with ASUS advanced gray-scale tracking technology to guarantee the △E color difference value is less than 2ProArt Palette to adjust color parameters delivers all consistency and quickly adjust to see different color performance USB-C connection for DisplayPort, USB data transmission and supports 90W power delivery Embedded four ports for connecting mouse, keyboard, or other USB devices A convenient mounting clamp is included to help free up desk space Ergonomically-designed stand with tilt, swivel, pivot, and heightadjustmentsDisplay Panel Size : 32-inch (81.28cm) Wide Screen (16:9) Display Viewing Area(HxV) : 708.48 x 398.52 mm Panel Backlight / Type : IPS True Resolution : 3840 x 2160Pixel Pitch : 0.1845 mm Display Colors : 1073.7M (10 bit)Color Saturation : 100% sRGB / 100% Rec. 709Brightness : 350 cd/m² (typical), 400 cd/m² (peak) Contrast Ratio: 1000:1 (typical)ASUS Smart Contrast Ratio (ASCR) : 100,000,000:1 Viewing Angle (CR ≧10) : 178°(H)/178°(V)Response Time : 5ms (GTG)Flicker-free : Yes HDR support : Yes, HDR-10Video Feature Trace Free Technology : Yes ProArt Preset : 11 modes (Standard / sRGB /Rec. 
709 / DCI-P3 / Rapid Rendering / HDR / DICOM / Scenery / Reading / User mode1 / User mode2)Color Accuracy : △E < 2ProArt Palette :Yes Gamma Adjustment : Yes (Support Gamma 2.6, 2.4, 2.2, 2.0, and 1.8) Color Adjustment : 6-axis (R, G, B, C, M, Y) Color Temperature Selection : 5 modes Picture-in-Picture : Yes Picture-by-Picture : Yes (support up to 2 windows)QuickFit+: Yes (Center Marker / Safety Area / Ruler / Customization)HDCP support : Yes, 2.2Adaptive-Sync support: Yes (40 ~ 60Hz)Audio Feature 2W x 2 stereo, RMSIO ports Signal Input :DisplayPort1.2 x 1HDMI (v2.0) x 2USB-C x 1 (90W PD)USB Hub:USB3.1 Type-A x 4Earphone Jack : Yes (3.5mm Mini-jack)Signal FrequencyDigital Signal Frequency : 29~160 KHz (H) / 40~60Hz (V)Power Consumption Power On (Typical): < 36W Power Saving Mode : < 0.5W ; Power Off Mode : 0.4W (Hard Switch)100-240V,50/60HzMechanical DesignTilt : +23°~ -5°Swivel : +30°~ -30°Pivot : 90°(Clockwise& Anticlockwise)Height Adjustment : 0 ~ 130 mm VESA Wall Mounting : 100 x 100 mm Desk C-Clamp Support DimensionsPhys. Dimension with Stand (WxHxD) : 727.08 x (471.48~601.48) x 245 mm Phys. Dimension without Stand (WxHxD) : 727.08 x 428.13 x 67.72 mm Box Dimension (WxHxD) : 840 x 516 x 280 mm WeightNet Weight (Esti.) : 12.6 Kg, Without Stand (Esti.) : 8.03 Kg,Gross Weight (Esti.) 
: 17.6 Kg Accessories Power cord,USB-C cable (optional),HDMI cable (optional),DisplayPort cable (optional),USB-C to USB-A cable (optional),Desk C-Clamp,Calibration Report,Quick Start Guide,Warranty Card,Welcome CardCompliance StandardEnergy Star®,FCC,UL/cUL,cTUVus,ICES-3,CB,CE,ErP,WEEE,ISO 9241-307,UkrSEPRO,CU,CCC,CEL,CECP,BSMI,Taiwan Energy Label,RCM,AU MEPS,VCCI,PSE,PC Recycle,J-MOSS,KC,KCC,SDoC,e-Standby,PSB,BIS,VN MEPS,TCO,RoHS,NOM,CEC,WHQL Windows 7/8.1/10,EPEAT Bronze,TÜV Flicker Free,TÜV Low Blue Light,Calman Verified,VESA DisplayHDR 400* Power Consumption is measuring a screen brightness of 200 nits without audio / USB / Card reader connection **All specifications are subject to change without notice Display PA329CVSpec SheetDisplayPort1.2USB-AEarphone JackHDMI(v2.0)USB-A USB-C HDMI(v2.0)。

DisplayPort 1.2兼容性图形卡要求说明书

DisplayPort 1.2兼容性图形卡要求说明书

Graphics Card RequirementComputer with DisplayPort 1.2-compatible graphics card (e.g. AMD Radeon with AMD Eyefinity) required for video wall mode. Backward compatible with most DisplayPort 1.1a equipment running current graphics drivers with feature set limited to that of your equipment. Compatibility with older graphics cards not guaranteed. MST-compliant DisplayPort 1.2 graphics cards are limited to a bandwidth of 21.6 Gbps amongst all monitors with higher resolution monitors using up more bandwidth. 1080p monitors will use up approximately 22% of bandwidth, whereas 4K monitors will use 40% or more. As each monitor will be different, it is necessary to verify the percentage of bandwidth being used by each monitor in the display settings interface of your graphics card. If the total percentage of bandwidth taken up by all the connected monitors exceeds 100%, an image will not display on one or more of them.Maximum supported video resolutions and number of monitors will be dependent upon your graphics card. Check the specifications of your graphics card to determine its capabilities.4-Port DisplayPort Multi-Monitor Splitter, MST Hub, 4K 60Hz UHD, DP1.2, TAAMODEL NUMBER:B156-004-V2Displays the same image on 4 DisplayPort monitors, extends the desktop across them, or combines all into one large monitor. Ideal for digital signage in schools, churches, conference rooms, trade shows and retail outlets.DescriptionThe B156-004-V2 4-Port DisplayPort 1.2 MST Hub connects up to four DisplayPort monitors to the DisplayPort output on your computer. Ideal for digital signage in schools, churches, conference rooms, trade shows, hotels and retail outlets, this hub allows you to display the same image on all four monitors, extend the desktop across them, or combine all four into one large monitor in video wall mode.The B156-004-V2 is compliant with DisplayPort 1.2 and backward compatible with versions 1.1 and 1.1a with the feature set being limited to that of your equipment. 
It supports high-definition video resolutions up to 1920 x 1080 (1080p) per monitor. In video wall mode, this hub supports expanded video resolutions, such as 4K @ 60Hz (3840 x 2160) in a 2 x 2 monitor configuration or 7680 x 1080 in a vertical 4 x 1 configuration.This MST hub also supports HDCP, EDID, DDC and 48-bit Deep Color (16 bits per channel), as well as DTS-HD, Dolby TrueHD and 7.1-channel surround sound audio. It works with all operating systems. Plug-and-play convenience means no software or drivers are needed. A built-in six-inch cable connects directly to a DisplayPort source. LEDs indicate when the monitors are receiving a signal. The B156-004-V2 complies with the Federal Trade Agreements Act (TAA) for GSA Schedule purchases.FeaturesConnects Up to 4 DisplayPort Monitors to Your Computer’s DisplayPort Output HighlightsSupports UHD video resolutions up to 4K @ 60Hz (3840 x 2160) qBuilt-in 6 in. cable connectsdirectly to DisplayPort sourceqSupports up to 48-bit DeepColor (16 bits per channel)qWorks with all operatingsystemsqSupports DTS-HD, DolbyTrueHD and 7.1-channelsurround soundqSystem RequirementsComputer with DisplayPort 1.2-compatible graphics card (e.g.AMD Radeon with AMDEyefinity) required for video wall mode.qBackward compatible with mostDisplayPort 1.1a equipmentrunning current graphics driverswith feature set limited to that of your equipment. Compatibilitywith older graphics cards notguaranteed.qMac OS X does not supportMST for NVIDIA and IntelGraphics Processor Units,limiting video display onconnected monitors to mirrormode.qMST-compliant DisplayPort 1.2graphics cards are limited to abandwidth of 21.6 Gbpsamongst all monitors with higher resolution monitors using upmore bandwidth. 1080pmonitors will use upapproximately 22% ofbandwidth, whereas 4Kmonitors will use 40% or more.As each monitor will be different, it is necessary to verify thepercentage of bandwidth beingused by each monitor in thedisplay settings interface of your graphics card. 
If the totalpercentage of bandwidth takenup by all the connected monitors exceeds 100%, an image willnot display on one or more ofthem.qMonitor(s) with DisplayPortinput.qPackage IncludesB156-004-V2 4-Port DisplayPort1.2 MST Hubq1 / 4SpecificationsIdeal for digital signs in schools, churches, conference rooms, trade shows and retail settings q Displays same image on 4 monitors simultaneously in mirror mode q Extends desktop across 4 monitors in extended modeq Combines 4 monitors into one large monitor in video wall mode q Built-in 6 in. cable connects directly to DisplayPort source q LEDs indicate when monitors are receiving a signalqMeets the Latest Performance StandardsSupports HD video resolutions up to 1920 x 1080 per monitorq Supports expanded resolutions in video wall mode, such as 4K @ 60Hz (3840 x 2160) in a 2 x 2monitor configuration or 7680 x 1080 in a vertical 4 x 1 configuration qSupports HDCP, EDID and DDCq Supports 48-bit Deep Color (16 bits per channel)q Supports DTS-HD, Dolby True HD and 7.1-channel surround sound audio q Backward compatible with DisplayPort 1.1 and 1.1aqEasy to Use Almost Anywhere Works with all operating systemsq Plug and play—no software or drivers requiredqTAA CompliantComplies with Federal Trade Agreements Act (TAA) for GSA Schedule purchasesq External power supply withNEMA 1-15P plug and 5 ft. cord (Input: 100–240V, 50/60 Hz,0.5A; Output: 5V, 2A)qOwner’s manualq2 / 43 / 4© 2023 Eaton. All Rights Reserved.Eaton is a registered trademark. All other trademarksare the property of their respective owners.4 / 4。

2.4g长距离视频 音频传输说明书 - TW-2000型号

2.4g长距离视频 音频传输说明书 - TW-2000型号

Step by step guide Transmitter connection
Install antenna; Connect Video source and Audio source to Transmitter’s input Connect power source Select transmitter’s channel
Powered by Tyhong T-sec
Connect Audio source (Mic etc.) and Video source (camera etc.) to transmitter
Frequency: 2.4Ghz 8 channels
Transmission Frequency (8 Channels)
CH1: 2378MHz CH5 2450MHz
Receiver:
Install antenna; Connect output wires to displayer and speaker (or recorder etc.) Connect power source (12V 1A) Select same channel as transmitters
User Manual
2.4g long range Video/Audio transmission -----Model TW-2000
2.4g long range Video/Audio transmission -----Model TW-2000
Features:
Transmission distance without obstacle: 2000meter with 2dbi antenna 2.4GHZ ISM frequency Audio*2 , Video*1 Sender/Receiver Support 8 channels PLL Technology achieve stable video/image Long life working.
  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

1. INTRODUCTION
Video data hiding is a more challenging task than image data hiding. Digital video is a very promising host: it can carry a large payload, and its potential for secret communication is largely unexplored. Since a video is formed from a sequence of frames, it offers the data hider the possibility of embedding and sending a large amount of data. Digital video is compressed, both spatially using transforms such as the discrete cosine transform (DCT) and temporally using motion compensation, before being transmitted. After the host video, with embedded data, has been subjected to compression, the distortion of the transform-domain coefficients may be large enough to make retrieval of the hidden data impossible. The perturbation of the coefficients depends on the severity of the compression. In image coding, we modulate the coefficients so that their perturbation stays within a pre-specified bound, and the amount of perturbation can be predicted from the compression in the spatial domain. In video, the exploitation of both spatial and temporal redundancies yields stronger compression, which makes the coefficient perturbation difficult to predict. The bandwidth of the transmission medium determines the allowable bitrate for the video. It is a challenge to design a robust data hiding system whose hiding parameters scale properly with reduced bitrates, and one has to trade off a significant amount of data embedding capacity to obtain robustness at higher compression rates. For example, to make an image data hiding system robust to JPEG [1] compression, we need to hide at a lower quality factor than that of the JPEG compression attack. For a video compression format such as MPEG-2 [2], there are many more variable parameters (e.g., variation in bitrate, Group of Pictures size, and so on).
2. PAST WORK - SELECTIVE EMBEDDING IN COEFFICIENTS SCHEME
In our scheme, the input video is first decompressed into a sequence of frames, as shown in Fig. 1, which provides a detailed block diagram of the video data hiding method. We embed data in the luminance (Y) component only, using the "Selective Embedding in Coefficients" (SEC) scheme [3]. In the SEC scheme, the 8×8 DCT of non-overlapping blocks is taken and the coefficients are divided by the JPEG quantization matrix at the design quality factor. A uniform quantizer of step size ∆ is applied to the DCT-domain coefficients of the host image, and data is embedded through the choice of scalar quantizer. The quantization index modulation (QIM) [4] scheme uses even and odd multiples of ∆ to store 0 and 1, respectively. Only those quantized DCT coefficients that lie in a certain low-frequency band and whose magnitude exceeds a certain threshold are used for hiding; hence the embedding is "selective". A coefficient below the threshold produces an erasure. The embedded bitstream is then encoded and transmitted. The decoder does not know explicitly where the data is hidden, but it applies the same local criteria as the encoder (the low-frequency embedding band and the magnitude threshold) to guess the locations of the hidden data. Channel noise may cause an insertion (the decoder incorrectly guessing that there is hidden data) or a deletion (the decoder incorrectly guessing that there is none), which can lead to de-synchronization and decoding failure. We use turbo-like codes with strong error-correction capability, together with channel erasures at the encoder [3], to account for this problem. Using the entire set of coefficients that lie in a designated low-frequency band, long codewords can be constructed to achieve very good correction ability. We use repeat-accumulate (RA) codes [5] in our experiments because of their simplicity and near-capacity performance on erasure channels.
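The even/odd quantizer choice at the heart of QIM can be sketched per coefficient as follows. This is a minimal illustration under our own naming, not the authors' implementation; the real scheme applies it only to the selected, threshold-passing DCT coefficients after division by the JPEG quantization matrix.

```python
import numpy as np

def qim_embed(coeff, bit, delta):
    """Embed one bit in a coefficient: quantize to the nearest even
    multiple of delta for bit 0, the nearest odd multiple for bit 1."""
    k = int(np.round(coeff / delta))
    if k % 2 != bit:
        # shift to the adjacent multiple with the required parity,
        # picking the side closer to the original coefficient
        k = k + 1 if coeff / delta > k else k - 1
    return k * delta

def qim_extract(coeff, delta):
    """Recover the bit from the parity of the nearest multiple of delta.
    Decoding is correct for any perturbation smaller than delta / 2."""
    return int(np.round(coeff / delta)) % 2
```

With ∆ = 4, for instance, a coefficient of 10.2 maps to 12.0 when embedding a 1 and to 8.0 when embedding a 0; the parity of the nearest multiple then recovers the bit even after moderate requantization noise.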
A rate-1/q RA encoder consists of q-fold repetition, pseudorandom interleaving, and accumulation of the resulting bit-stream. Thus, once the data is embedded using the SEC scheme, error resilience is added by the RA code. The modified frames are then converted to an MPEG-2 video. The video is subjected to MPEG-2 compression attacks (change in bitrate, variation in the Group of Pictures (GOP) size). The received video is then decoded back into a sequence of frames, from which the embedded data of each frame is decoded iteratively using the sum-product algorithm [6].
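The three encoder stages just described can be sketched in a few lines. The seeded NumPy permutation here is only a stand-in for whatever pseudorandom interleaver the encoder and decoder actually share, and the iterative sum-product decoder is omitted.

```python
import numpy as np

def ra_encode(bits, q, seed=0):
    """Rate-1/q repeat-accumulate encoding: repeat each data bit q
    times, pass the result through a pseudorandom interleaver, and
    accumulate modulo 2 (a running XOR, i.e. a 1/(1+D) accumulator)."""
    repeated = np.repeat(np.asarray(bits), q)      # q-fold repetition
    rng = np.random.default_rng(seed)              # fixed seed = shared interleaver
    interleaved = repeated[rng.permutation(repeated.size)]
    return np.cumsum(interleaved) % 2              # accumulation mod 2
```

The decoder must use the same q and the same interleaver; in the hiding system, each output bit of the RA code is then mapped onto one selected DCT coefficient via QIM.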