Video Editing: Translated Foreign Literature


Translated Foreign Literature: A Study of Film and Television Post-Production and Editing


Foreign literature translation: original and translation. Source: Bird D. The study of film post-production and editing [J]. Screen Education, 2016, 3(2): 78-89.

Original

The Study of Film Post-Production and Editing
Bird D

Abstract
Film and television post-production is a highly integrated form of creation that combines words, sound, images, and other audio-visual means. It is the final stage of program production, and the level of production bears directly on the quality of the finished program. The following discusses post-production in terms of its historical development, its characteristics, and the points that demand attention. Within the production process as a whole, post-production and editing are just as important as pre-production. To do post-production well, the editing method must be chosen according to the requirements of the particular program.

Key words: film and television; post-production; editing

1 Introduction
From the illusory worlds created by Hollywood, to television news focused on real life, to a flood of advertising, film and television touch every part of our lives. In the past, program production was the work of professionals and remained opaque to the public. Over recent decades, digital technology has entered every step of film and television production, with computers gradually replacing much of the original equipment and playing a major role in every link of the production chain. The professional hardware and software once used for production were extremely expensive; non-professionals rarely even saw the equipment, let alone used such tools skillfully to make their own work.
As PC performance has increased dramatically and prices have fallen, film and television production has gradually moved from dedicated professional hardware onto the PC platform. Software that was once strictly professional has been ported to this platform as well, and its price has become increasingly accessible. At the same time, the application of production techniques has expanded from professional film and television into computer games, multimedia, the web, home entertainment, and other fields. Many industry professionals, along with a large number of film enthusiasts, can now use a single computer to make their own television programs with their own hands.

Digital nonlinear editing not only combines the advantages of traditional film and video editing but develops them further; it represents a significant advance in video editing technology. Starting in the 1980s, digital nonlinear editing gradually replaced traditional methods in film production and became the standard method of film editing. With the rapid development of production technology, post-production has also taken on a very important responsibility: the creation of special-effects shots. A special-effects shot is one that cannot be obtained by direct shooting. Early film and television effects were mostly completed by traditional methods such as model making, special photography, and optical compositing, mainly during the shooting and developing stages. Computers provide more and better methods for producing effects, and many effects that once had to be achieved with models and photographic tricks can now be completed digitally, so more and more effects work has become part of post-production.
As the analysis above shows, linear and nonlinear editing each have their own characteristics, and neither can be dispensed with in post-production. To edit a program well, the appropriate mode must be selected according to the program's requirements and the characteristics of the two editing modes.

2 The Development of Film Post-Production Technology
In the early 1990s, developed countries such as Canada and the United States successfully combined computer and multimedia technology with film and television production and launched the desktop editing studio, the forerunner of today's nonlinear editing workstation. Nonlinear editing (NLE) is the product of combining traditional equipment with computer technology: the computer records all material as digital files that can be continuously stored, updated, and edited. Any fragment can be viewed immediately and changed at any time, so the original editing work, such as cutting, switching, and special transitions between shots, can be finished as efficiently as possible. The edit is completed digitally on the computer, which then generates a complete program for playback to a video monitor or transfer back to tape. In essence, this technique provides a convenient, quick, and efficient method of television editing.
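The random-access property just described, in which edits reference stored clips rather than re-recording them, can be sketched as a minimal edit decision list. This is an illustrative toy, not the data model of any real editing system; all class, method, and file names are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One edit event: a reference into a source clip, not a copy of it."""
    source: str   # media file identifier (hypothetical names below)
    t_in: float   # in point, in seconds
    t_out: float  # out point, in seconds

    @property
    def duration(self) -> float:
        return self.t_out - self.t_in

class Timeline:
    """A nonlinear timeline: inserting an event simply shifts later events,
    instead of forcing everything after it to be redone as on tape."""
    def __init__(self):
        self.events = []

    def insert(self, index: int, event: Event) -> None:
        # Re-ordering references is cheap; no media is re-copied.
        self.events.insert(index, event)

    def duration(self) -> float:
        return sum(e.duration for e in self.events)

tl = Timeline()
tl.insert(0, Event("interview.mov", 10.0, 25.0))
tl.insert(1, Event("b_roll.mov", 0.0, 5.0))
# A late change: drop a new shot between the two existing ones.
tl.insert(1, Event("cutaway.mov", 2.0, 4.0))
print([e.source for e in tl.events])  # ['interview.mov', 'cutaway.mov', 'b_roll.mov']
print(tl.duration())                  # 22.0
```

The key point the sketch makes is that the late insertion costs one list operation, whereas in tape-based linear editing the same change would force everything after the insert point to be re-recorded.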
Today, nonlinear editing has become an important means of post-production; linear editing remains another. So-called linear editing uses a one-to-one or desktop editing setup between video recorders: the material on the source tape is copied electronically onto another tape on the record machine, with edit points and transitions set along the way. Because the process goes from analog signal to analog signal, and the result is fixed on the tape once recorded, it cannot be modified; if new material must be inserted, or the length of a shot changed, everything after that point must be redone. Nonlinear editing is defined in contrast to this. Both are important means of post-production editing. Before nonlinear editing appeared, linear editing played the dominant role in post-production; today the two complement each other, each with its own characteristics and advantages. They are also closely related, since linear editing is the predecessor of nonlinear editing. First, in their basic ideas and artistic concepts, which are the core of post-production, the two are the same. Second, many professional concepts and terms are shared between them.
The user manual of the well-known nonlinear editing software Speed Razor Pro, for example, draws on the concepts of linear editing, and the manual as a whole is organized around the functions of a high-end linear editing controller.

3 Characteristics of Film Post-Production and Editing
After decades of development, linear editing technique is very mature. Using the edit controller to operate directly on the tape material, the interface is intuitive and concise and the operation simple. With assemble and insert editing, pictures and sound can be edited separately; together with a character generator, an effects switcher, a time base corrector, and so on, the system can fully meet production needs and complete post-production tasks such as shot assembly, transitions, setting in and out points, overlaying titles and graphics, and adding sound effects and music.

Linear editing systems, however, are based on tape as the recording carrier, with the signal arranged linearly in time. Searching for material means shuttling the VCR back and forth; shots can be searched only in order along a one-dimensional time axis, without jumping, so selecting material is very time-consuming. Modification is difficult, because the electronic edit is based on linearly recorded tape and can normally only be recorded in editing order. Insert editing allows sound or images already on tape to be replaced, but the replacement clip must be exactly the same length as the clip replaced; nothing can be added or deleted, and the length of the program cannot change. This makes revision very inconvenient, yet a television program typically goes through many rounds of editing from rough cut to final version. Another serious problem in program production is wear on the master tape: the essence of linear editing is copying the original material onto another tape.
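The copy-onto-another-tape process just described accumulates quality loss with every generation. A toy model makes the contrast with digital copying concrete; the figure of 3 dB lost per analog copy generation is an assumption chosen for illustration, not a measured property of any particular tape format:

```python
def snr_after_generations(master_snr_db: float, generations: int,
                          loss_per_copy_db: float = 3.0) -> float:
    """Toy model: every analog dub subtracts a fixed number of decibels."""
    return master_snr_db - loss_per_copy_db * generations

# An analog master at 50 dB dubbed through four editing generations,
# next to a digital (nonlinear) chain whose bit-for-bit copies lose nothing:
analog  = [snr_after_generations(50.0, g) for g in range(5)]
digital = [50.0 for _ in range(5)]
print(analog)   # [50.0, 47.0, 44.0, 41.0, 38.0]
print(digital)  # [50.0, 50.0, 50.0, 50.0, 50.0]
```

The flat digital curve is the point: a nonlinear system edits references to stored files, so a tenth-generation program is bit-for-bit identical to the first.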
Because the signal in a linear editing system is mainly analog video, it is vulnerable to external interference during copying and transmission in the editing process, resulting in signal loss. Each new version built on the previous one lowers the image quality further, and every effect adds its own loss. The recorders and tapes also suffer heavy wear: editing a few minutes of television may require selecting from hundreds or even thousands of shots and shuttling the tape back and forth to search and edit, causing serious mechanical wear on the VCR and shortening its service life. Tape has other weaknesses as well, such as stretching, distortion, brittleness, demagnetization, and scratches, all of which degrade quality. The system structure is complex and relatively hard to operate: a linear editing system involves many kinds of equipment of uneven performance and differing specifications, and connecting them together causes large attenuation of the video signal. In addition, operating many devices at once requires several operators and a complicated workflow. For all these reasons, as new technology developed, a new concept of audio and video production, with corresponding hardware, emerged: nonlinear editing.

4 Correct Selection of Film Post-Production Editing Mode
Everything is relative. Linear and nonlinear editing each have their own characteristics and advantages, and neither can be dispensed with in post-production. To do post-production well, the editing method must be chosen according to the requirements of the program. First, for commercials, and for the openings of documentaries and feature programs, production based mainly on nonlinear editing is appropriate.
This is because commercials make heavy use of multilayer image compositing, motion, transparency, fast and slow motion, special title treatments, 3D, color processing, and animation effects, and documentary and program openings likewise rely on effects and on extensive dissolves longer than five seconds; a nonlinear editing system handles these effects far more easily. Second, for news production, the traditional linear editing suite is appropriate. To increase the amount of information, television news generally requires each shot to be shorter than five seconds, and news footage rarely uses effects, with most content cut together directly shot by shot. Linear component editing, commonly used for news, reduces signal loss in transmission and better guarantees program quality. Third, for live broadcasts, live performances and entertainment, and live classroom situations, linear editing equipment is generally preferred. Live recording demands equipment that responds immediately; a live broadcast cannot stop once it has started, must succeed in one take, and allows no mistakes. In such cases the traditional linear editing system is ideal. In live sports coverage, however, because highlights must be replayed, both editing modes should be used together, with linear editing primary and the nonlinear system as a supplement; in this way the two cooperate to meet special needs.

Translation: A Study of Film and Television Post-Production and Editing
Bird D
Abstract: Film and television post-production is a highly integrated form of creation combining words, sound, images, and other audio-visual means. It is the last stage of program production, and the level of production bears directly on program quality. The following discusses post-production in terms of its historical development, its characteristics, and the points that demand attention.

Chinese-English bilingual translated foreign literature for a graduation project in Radio and Television Directing / Journalism and Communication: Editing Strategies in Television News Documentaries


Editing Strategies in Television News Documentaries
Richard J. Schaefer

Abstract: This study describes the editing techniques used in four renowned television news documentaries that aired between 1954 and 1982. It is informed by Peirce's theory of signs, by realist and symbolic film theory, and by some of the understandings common to broadcast journalists. The analysis attempts to bridge subdisciplinary boundaries to advance an accessible vocabulary for discussing journalistic representational strategies. The prevalence of continuity and thematic editing styles, special transitional effects, audio track synchronization, and differing cutting rates was quantitatively analyzed and linked to classic film realism and montage strategies. The quantitative findings and a comparative case-study analysis of the structural nuances of each documentary illustrate the variety of representational strategies used by network journalists. These findings are discussed in light of analysts' assertions that televised reports have become increasingly journalist centered.

Key words: TV documentary; semiotics; editing strategy; film theory; production techniques

For decades, culturally oriented critics have studied the routine practices of print and broadcast journalists. Altheide and Snow (1979), Epstein (1973), Glasser and Ettema (1989), McManus (1994), and Tuchman (1972, 1978) examined journalism from the broader context of organizational and professional routines. Their studies provide a functionalist alternative to journalists' understandings of news work. The researchers described determinations of news value, attempts to balance sources, and objective styles of representation as efforts to deflect criticism and legitimate news practices. By naturalizing their professional routines, network journalists were able to meet commercial imperatives by producing news and documentary reports more efficiently.
In light of this ethnographic perspective and a belief in the media's social responsibilities, other researchers have relied on an information-transmission model to examine journalistic representations. Gans (1979), Gitlin (1980), the Glasgow University Media Group (1982), Graber (1988), Gunter (1987), and Robinson (1986) used content analyses and reception studies to support claims that standard journalistic practices often fail to meet their full potential for conveying information. Patterson (1993) turned to longitudinal content comparisons when considering press performance. He described recent political coverage as becoming increasingly negative and journalist centered. This more negative style privileges journalists' voices over those of the politicians and others featured in news reports. Thus, even the journalistic practice of previous decades has been used as a basis for evaluating contemporary press practices.

Peirce's Semiotics and Film Theory
In his semiotic work, Peirce (1940) distinguished between the iconic, indexical, and symbolic qualities of signs. Iconic signs bear a resemblance to, and convey many of the details and characteristics of, the objects they represent. Indeed, photographic signs are icons because they look like the objects they represent. Peirce wrote that photographic signs were also indexical, because the photograph is a by-product or trace of the thing it represents (much as a footprint communicates a step and a weathervane signifies wind direction). Signs can also have symbolic qualities that convey arbitrary and conventional meanings. Symbolic meaning is derived not so much from a sign's relationship to actual events as from its conventional usage and propositional appropriateness within a broader semiotic argument. According to Peirce (1940), the three different qualities of signs are not exclusive. A visual image may have a combination of iconic, indexical, and symbolic overtones.
The iconic and indexical qualities of contemporary imaging technologies, including film and video, have enabled audiovisual signs to rival the written transcript and written description as the most accurate representations of events. This "camera of record" approach is based on the detail evident in film and video representations and on journalistic guarantees that the images are authentic. This study examined four significant and well-distributed works from a single broadcast-journalism genre. It utilized a case-study approach reinforced by a quantitative analysis of four editing variables to reveal the programs' varied editing strategies. It thus explored a frequently overlooked structural aspect of broadcast journalism.

The Documentaries
Each of the four documentary telecasts aired during a different period in a genre that scholars have labeled the prestige documentary (Bluem, 1965; Carroll, 1978; Freed, 1972; Rosenthal, 1988). The four programs were chosen, in part, because they achieved such notoriety that much has been written about them. The programs are still available for viewing in many public and university libraries, which makes it possible for readers to see the documentaries for themselves.

Coding
The programs were coded on a shot-by-shot basis. This allowed four formal variables to be tabulated for each visual transition in the documentaries. The following four variables were tabulated because, taken in combination, they provide insight into the prevalence of realist and montage editing strategies:
(1) Shot length: the duration, in seconds and tenths of seconds, of each visual image. This variable indicates cutting rate, or how many visual edits were made per minute. High cutting rates suggest more overtly artificial and fragmented editing strategies.
Low cutting rates suggest a less fragmented "camera of record" approach to representation.
(2) Use of straight cuts or special-effect edits: whether visual transitions used straight cuts or more elaborate special effects, such as dissolves or fades. Frequent use of the latter would typically convey a sense of artificiality and reduce classic continuity realism.
(3) Style of visual edit: whether the visual edit was a continuity edit, a montage edit, a jump cut or transitional edit, or an edit with no apparent visual logic. Transitions between shots recorded at a single site and without any apparent break in action were characterized as continuity edits. Montage edits reinforced traditional or avant-garde symbolic understandings. If a transition could have been characterized as both a continuity and a montage edit, it was counted only as a continuity technique, because its use in the continuity sequence fostered a more iconic and indexical realistic interpretation. Jump cuts make viewers aware that an event has been condensed through editing; thus, they destroy the illusion of continuity. Transitional edits are sometimes considered a particular type of continuity edit: they typically begin with an exterior shot of a building or outside detail and then move to a scene inside the building. Finally, some transitions between shots lacked a clear visual logic and were labeled as such.
(4) Audiovisual synchronization: whether or not the primary audio track was edited with synch sound that appeared to have been recorded with the visual image. Such synchronized sound reinforces realistic interpretations. Sounds that appeared not to have been recorded on location with the visual image, and were presumably added during the editing stage, were categorized as asynchronous.
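The four variables above can be tabulated from shot-by-shot codes with a short script. The sample shots and field layout below are hypothetical, invented for illustration, and are not data from the four documentaries:

```python
# Each shot: (length in seconds, transition type, edit style, synchronous audio?)
shots = [
    (4.2, "cut",      "continuity", True),
    (2.8, "cut",      "montage",    False),
    (6.0, "dissolve", "continuity", True),
    (3.0, "cut",      "jump",       True),
]

total_seconds = sum(s[0] for s in shots)
# Cutting rate: visual edits per minute of running time.
cutting_rate = len(shots) / (total_seconds / 60.0)
# Proportions of special-effect transitions, continuity edits, synch sound.
pct_effects    = sum(s[1] != "cut" for s in shots) / len(shots)
pct_continuity = sum(s[2] == "continuity" for s in shots) / len(shots)
pct_synch      = sum(s[3] for s in shots) / len(shots)

print(round(cutting_rate, 1))                     # edits per minute
print(pct_effects, pct_continuity, pct_synch)
```

Read against the study's framework, a low cutting rate with high continuity and synch-sound proportions would indicate a classic realist strategy, while the opposite pattern would indicate montage.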
The presence of such overlaid asynchronous sounds reinforces a more artificial and symbolically complex editing strategy.

Production Techniques of the Documentaries
Subject matter and individual producer preferences influenced the construction of the four documentaries. The relatively slow pacing (a cutting rate of 3.4) of the 1954 McCarthy broadcast is partly attributable to the fact that executive producer Edward R. Murrow rejected the quick-paced editing style common to the newsreels of the 1940s and 1950s (Yeager, 1956, p. 202). Instead, Murrow adopted a more classic realist strategy that relied on the iconicity and indexicality of "camera of record" footage of McCarthy acting like a political bully. This classic realist approach was practical because, as a prominent newsmaker, Senator McCarthy left a trail of authentic film images in his wake. Murrow and producer Fred Friendly were able to collect those images and select clips that showed McCarthy abusing his congressional powers. Although Murrow's live lead-ins provided some critical context for the authenticated clips, the images of McCarthy were presented in a seemingly unmanipulated manner. This technique emphasized the indexicality of the images as traces of real events. Thus, rather than Murrow merely issuing a subjective attack on McCarthy, the senator's own actions appeared to be presented for viewers to judge for themselves.

Conclusion
The use of traditional montage and independently edited audio and visual tracks in the four documentaries appears to reinforce some press critics' claims that there has been a noticeable rise in overtly subjective and journalist-centered reporting during the last few decades. However, it should be remembered that all these documentaries presented the strong editorial sentiments of their producers. Even the Report on Senator McCarthy, which was most obviously grounded in classic realism, conveyed the subjective editorial assertions of its producers.
In fact, by emphasizing the indexical properties of the imagery, classic realist representations can present subjective arguments in a seemingly objective and nondiscursive manner, a manner that is even less self-evidently biased than montage strategies. Therefore, if classic realist representations are giving way to more montage-structured journalistic efforts, it would follow that observers would be more likely to notice journalists taking editorial stands in contemporary television reports. Although each of the later documentaries relied more heavily on synthetic montage techniques than its predecessors, this four-documentary analysis does not prove that broadcast journalists are now turning to montage over classic realism. To make that claim, it would be necessary to conduct rigorous longitudinal analyses of larger representative samples of news, news magazines, and documentary reports. In this study I have attempted to bridge some of these subdisciplinary boundaries by integrating baseline concepts and vocabulary from film theory, Peircian semiotics, and the jargon of professional journalists. Just as manipulating editing techniques advances the art of television reporting, using a commonly accessible vocabulary for analyzing the techniques of nonfictional editing can advance the art of "reading" television journalism.

References
Adatto, K. (1990, May 28). The incredible shrinking sound bite. New Republic, pp. 20-23.
Alexander, S. L. (1988). CBS News and subpoenas duces tecum, 1971-1987. Communications and the Law, 10(4), 3-16.
Altheide, D. L., & Snow, R. P. (1979). Media logic. Beverly Hills, CA: Sage.
Arnheim, R. (1967). Film as art. Berkeley: University of California Press.
Bazin, A. (1967). What is cinema? (H. Gray, Trans.). Berkeley: University of California Press.
Bazin, A. (1971). What is cinema? Vol. II (H. Gray, Trans.). Berkeley: University of California Press.
Benjamin, B. (1984). CBS Benjamin Report: CBS Reports "The Uncounted Enemy: A Vietnam Deception": An examination. Washington, DC: Media Institute.
Benjamin, B. (1988). Fair play: CBS, General Westmoreland, and how a television documentary went wrong. New York: Harper & Row.
Berkowitz, D. (1990). Refining the gatekeeping metaphor for local television news. Journal of Broadcasting & Electronic Media, 34, 55-68.
Bluem, W. A. (1965). Documentary in American television. New York: Hastings House.
Carroll, R. L. (1978). Factual television in America: An analysis of network television documentary programs, 1948-1975 (Doctoral dissertation, University of Wisconsin-Madison, 1978). Dissertation Abstracts International, 39(01), A5.
Carter, H. (Economou, R., Producer). (1983, April 23). Uncounted enemy, unproven conspiracy. Inside Edition. New York: Public Broadcasting Service.
Compesi, R. J., & Sheriffs, R. E. (1985). Small format television production. Boston: Allyn & Bacon.
Crile, G. (Producer). (1982, January 23). The uncounted enemy: A Vietnam deception (M. Wallace, Correspondent, & I. W. Klein, Editor). New York: CBS.
Curtin, M. (1993). Packaging reality: The influence of fictional forms on the early development of television documentary. Journalism Monographs, 137.
Davis, P. (Producer). (1971, February 23 and March 23). The selling of the Pentagon (R. Mudd, Correspondent). CBS Reports. New York: CBS.
Deleuze, G. (1986). Cinema 1: The movement-image (H. Tomlinson & B. Habberjam, Trans.). Minneapolis: University of Minnesota Press.
Deleuze, G. (1989). Cinema 2: The time-image (H. Tomlinson & R. Galeta, Trans.). Minneapolis: University of Minnesota Press.
Drew, D. G., & Caldwell, R. (1985). Some effects of video editing on perceptions of television news. Journalism Quarterly, 62, 828-831.
Eisenstein, S. (1949). Film form: Essays in film theory (J. Leyda, Ed. & Trans.). New York: Harcourt, Brace.
Epstein, E. J. (1973). News from nowhere. New York: Random House.
Fallows, J. (1996, February). Why Americans hate the media. Atlantic Monthly, 277(2), 45-64.
Freed, F. (1972). The rise and fall of the television documentary. Television Quarterly, 10(1), 55-62.
Friendly, F. W. (Producer). (1954, March 9). Report on Senator McCarthy (E. R. Murrow, Correspondent & Executive Producer). See It Now. New York: CBS.
Friendly, F. W. (1967). Due to circumstances beyond our control. New York: Random House.
Gans, H. J. (1979). Deciding what's news: A study of CBS Evening News, NBC Nightly News, Newsweek and Time. New York: Vintage Books.
Gitlin, T. (1980). The whole world is watching: Mass media in the making and unmaking of the New Left. Berkeley: University of California Press.
Glasgow University Media Group (1982). Really bad news. London: Writers & Readers.
Glasser, T. L., & Ettema, J. S. (1989). Investigative journalism and the moral order. Critical Studies in Mass Communication, 6, 1-20.

Translation: Editing Strategies in Television News Documentaries
Richard J. Schaefer
Abstract: This paper describes the editing techniques used in four renowned television news documentaries that aired between 1954 and 1982.

Translated Foreign Literature on Film and Video Editing (3,000+ words)


The article XXX final product. It also examines the XXX editing practices, including the use of digital tools and software. Finally, the article looks at the XXX.

Video editing is an essential aspect of the film and television production process. It XXX final product. The editing process can make or break a film or television show, as it can greatly impact the XXX. In this article, we will XXX to the final product.

The XXX Editing

XXX. It allows filmmakers to shape the story, control the pace, and create a XXX. A XXX's job is to take the raw footage and turn it into a XXX audience. They must make decisions about what footage to include, what to cut, and how to arrange it to create the desired effect.

Technology and Editing

Technology has XXX. Digital tools and software have made the process faster, more efficient, and XXX collaborate with others in real time.

Animation Design and Television Advertising: Chinese-English Translated Foreign Literature


Copywriting for Visual Media

Before XXX, and film advertising were the primary means of advertising. Even today, local ads can still be seen in some movie theaters before the start of the program. The practice of selling time between programming for commercial messages has become a standard in the visual media XXX format for delivering short visual commercial messages very XXX.

⑵ Types of Ads and PSAs

There are various types of ads and public service announcements (PSAs) that XXX ads, service ads, and XXX a specific product, while service ads promote a specific service. Institutional ads, on the other hand, promote an entire company or industry. PSAs, by contrast, are noncommercial messages that aim to educate and inform the public on important issues such as health, safety, and social XXX.

⑶ The Power of Visual Advertising

XXX. The use of colors,
English Essay Template: Editing Videos


English:
As a video editing enthusiast, I have always enjoyed the process of editing videos to create compelling and engaging content. Video editing is a creative and technical process that involves cutting, trimming, and arranging video clips to tell a story or convey a message. There are several key steps in the video editing process, including importing footage, organizing clips, adding transitions and effects, and exporting the final product.

One of the most important aspects of video editing is the ability to select the best clips and arrange them in a way that flows smoothly and effectively communicates the intended message. This requires a keen eye for detail and a good understanding of pacing and rhythm. For example, when editing a travel vlog, I carefully select the most visually stunning and interesting clips to showcase the destination, while also ensuring that the video maintains a good pace and keeps the viewer engaged.

In addition to selecting and arranging clips, video editing also involves adding transitions, effects, and music to enhance the overall viewing experience. Transitions help to move smoothly from one clip to the next, while effects can add visual interest and style to the video. For instance, when editing a music video, I may use dynamic transitions and visual effects to complement the rhythm and mood of the music, creating a more immersive and engaging viewing experience.

Finally, after all the editing is complete, the last step is to export the video in the desired format and quality. This requires a good understanding of video codecs, resolutions, and file formats to ensure that the video looks and sounds its best across different devices and platforms.
For example, when editing a promotional video for a client, I carefully consider the target audience and the platforms where the video will be shared, and then export the video in the appropriate format for optimal viewing. Overall, video editing is a dynamic and rewarding process that allows me to unleash my creativity and storytelling skills. Whether I am editing a short film, a tutorial, or a promotional video, I always strive to create content that captivates and resonates with the audience.

Chinese (translation): As a video editing enthusiast, I have always enjoyed the process of editing videos in order to create compelling and engaging content.
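The workflow the essay walks through, arranging clips and then exporting for a target platform, can be sketched as a small export plan. The preset table and every name in it are illustrative assumptions, not real platform requirements, which should always be checked against each platform's current specifications:

```python
# Hypothetical export presets keyed by target platform (illustrative values only).
PRESETS = {
    "youtube":   {"codec": "h264", "resolution": (1920, 1080), "container": "mp4"},
    "instagram": {"codec": "h264", "resolution": (1080, 1080), "container": "mp4"},
}

def plan_export(clips, platform):
    """Build a simple export plan: clip order, total running time, and the
    codec/resolution/container preset for the chosen platform."""
    preset = PRESETS[platform]
    total = sum(end - start for _name, start, end in clips)
    return {"order": [name for name, *_ in clips],
            "duration_s": total,
            **preset}

# Clips as (hypothetical file name, in point, out point) in seconds.
clips = [("intro.mov", 0.0, 3.0), ("main.mov", 5.0, 45.0), ("outro.mov", 0.0, 2.0)]
plan = plan_export(clips, "youtube")
print(plan["order"], plan["duration_s"], plan["resolution"])
```

Separating the edit (the clip list) from the delivery settings (the preset) mirrors the essay's point that one finished edit may be exported several times for different devices and platforms.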

Video Special Effects: Translated Foreign Literature (Other Majors)


Video Special Effects
Peng Huang

Object-space NPAR
Meier was the first to produce painterly animations from object-space scenes [53]. He triangulated surfaces in object space and distributed strokes over each triangle in proportion to its area. Since his initial work, many object-space NPAR systems have been presented. Hall's Q-maps [29] (a Q-map is a 3D texture that adapts to the intensity of light to give the object in the image a 3D look; for example, more marks are made where an object is darker) may be applied to create coherent pen-and-ink shading. A system capable of rendering object-space geometries in a sketchy style was outlined by Curtis [15]; it operates by tracing the paths of particles traveling stochastically around contours of a depth image generated from a 3D object. See Figure for some examples. In addition, most modern graphical modeling packages (3D Studio MAX, Maya, XSI SoftImage) support plug-ins that offer the option of rendering object-space scenes with a flat-shaded, cartoon-like appearance.

Image-space NPAR
Most NPAR systems in image space are still based on static painterly rendering techniques, brushing strokes frame by frame and trying to avoid the unappealing "swimming" that distracts the audience from the content of the animation. Litwinowicz extends his static method and makes use of optical flow to estimate a motion vector field that translates the strokes painted on the first frame to successive frames [47]. A similar method is employed by Kovacs and Sziranyi [42]. A simpler solution is proposed by Hertzmann [33], who differences consecutive frames of video, re-painting only those areas that have changed above some global (user-defined) threshold. Hays and Essa's approach [32] builds on and improves these techniques by using edges to guide painterly refinement. See Figure for some examples.
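Hertzmann's frame-differencing idea, re-painting only regions whose change between consecutive frames exceeds a global user-defined threshold, can be sketched in a few lines. The frames here are toy grayscale arrays represented as nested lists; a real system would operate on full video frames and paint strokes only inside the resulting mask:

```python
def repaint_mask(prev, curr, threshold):
    """Per-pixel mask: True where the new frame differs enough to re-paint.
    prev and curr are same-sized 2D grids of grayscale intensities."""
    return [[abs(c - p) > threshold for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

# Two toy 3x3 grayscale frames; only the centre pixel changes strongly,
# the other pixels jitter by amounts below the threshold (sensor noise).
prev = [[10, 10, 10],
        [10, 10, 10],
        [10, 10, 10]]
curr = [[11, 10, 10],
        [10, 200, 10],
        [10, 10, 12]]

mask = repaint_mask(prev, curr, threshold=20)
print(mask[1][1], sum(sum(row) for row in mask))  # True 1
```

The threshold is the user-defined global parameter the text mentions: raising it leaves more of the previous painting untouched (less swimming, more ghosting), and lowering it re-paints more aggressively.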
In their current work, they are looking into region-based methods to extend beyond pixels to cell-based renderings, which reflects the trend from low-level analysis to higher-level scene understanding. We also find various highly interactive image-space tools that assist users in the process of creating digital non-photorealistic animations. Fekete et al. describe a system [23] to assist in the creation of line-art cartoons. Agarwala proposes an interactive system [2] that allows children and others untrained in cel animation to create 2D cartoons from images and video. Users have to hand-segment the first image, and active contours (snakes) are used to track the segmentation boundaries from frame to frame. It is labor intensive (users need to correct the contours every frame), unstable (due to the susceptibility of snakes to local minima; tracking fails under occlusion) and limited to video material with distinct objects and well-defined edges. Another technique is called "advanced rotoscoping" by the Computer Graphics community: artists draw a shape in key-frames and then interpolate the shape over the interval between key-frames, a process referred to as "in-betweening" by animators. The film "Waking Life" [26] used this technique. See Figure for some examples. NPAR techniques in image-space, as well as commercial video-effects software such as Adobe Premiere, provide low-level effects (slow-motion, spatial warping, motion blur, etc.) but fail to perform high-level video analysis and are unable to create more complicated visual effects (e.g. motion emphasis). Lake et al. present techniques for emphasizing the motion of cartoon objects by introducing geometry into the cartoon scene [43]. However, their work is limited to object-space, avoiding complex high-level video analysis, and their "motion lines" are quite simple. In their current work, they are trying to integrate other traditional cartoon effects into their system.
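The "in-betweening" process mentioned above can be sketched as linear interpolation of corresponding key-frame vertices. This is a minimal illustration only; the function name and the toy shapes are assumptions, not any cited system's implementation:

```python
def inbetween(shape_a, shape_b, t):
    """Linearly interpolate two key-frame shapes (lists of (x, y) vertices
    with the same length and ordering) at time t in [0, 1]."""
    assert len(shape_a) == len(shape_b)
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(shape_a, shape_b)]

# A triangle drawn on key-frame 0 and key-frame 10; generate the middle frame.
key0 = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
key10 = [(2.0, 0.0), (6.0, 0.0), (4.0, 3.0)]
mid = inbetween(key0, key10, 0.5)
print(mid)  # [(1.0, 0.0), (5.0, 0.0), (3.0, 3.0)]
```

Real in-betweening tools must also solve vertex correspondence and handle occlusion, which this linear sketch ignores.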
Collomosse and Hall first introduced high-level Computer Vision analysis to NPAR in "VideoPaintbox" [12]. They argue that comprehensive video analysis should be the first step in the artistic rendering (AR) process; salient information (such as object boundaries or trajectories) must be extracted prior to representation in an artistic style. By developing novel Computer Vision techniques for AR, they are able to emphasize motion using traditional animation cues [44] such as streak-lines, anticipation and deformation. Their unique contribution is a video-based NPR system which can process over an extended period of time rather than on a per-frame basis. This advance allows them to analyze trajectories, make decisions regarding occlusions and collisions, and perform motion emphasis. In this work we will also regard video as a whole rather than the sum of individual frames. However, the segmentation in their "computer vision component" is labor intensive, since users have to manually identify polygons, which are "shrink-wrapped" to the feature's edge contour using snake relaxation [72] before tracking. Their tracking is also based on the assumption that contour motion can be modeled by a linear conformal affine transform (LCAT) in the image plane. We try to use a more automatic segmentation and non-rigid region tracking to improve the capability of video analysis. See Figure for some examples. Another high-level video-based NPAR system is provided by Wang et al. [69]. They regard video as a 3D lattice (x, y, t) and then implement spatio-temporal segmentation of homogeneous regions using mean shift [14] or improved mean shift [70] to obtain volumes of contiguous pixels with similar colour. Users define salient regions by manually sketching on key-frames, and the system then automatically generates salient regions per frame. This naturally builds the correspondence between successive frames, avoiding non-rigid region tracking.
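Mean shift, as used by Wang et al. for spatio-temporal segmentation, iteratively moves a query point to the mean of the samples inside a kernel window until it settles on a local mode. A minimal 1D flat-kernel sketch is shown below purely for illustration; the cited systems operate on a full (x, y, t) colour lattice, and the bandwidth value here is an assumption:

```python
def mean_shift_1d(samples, start, bandwidth=1.0, iters=50):
    """Shift `start` to a local mode of `samples` by repeatedly moving it
    to the mean of all samples within `bandwidth` (flat kernel)."""
    x = float(start)
    for _ in range(iters):
        window = [s for s in samples if abs(s - x) <= bandwidth]
        if not window:
            break
        new_x = sum(window) / len(window)
        if abs(new_x - x) < 1e-9:
            break
        x = new_x
    return x

# Two "colour" clusters; a point started in either basin converges to its mean.
data = [1.0, 1.2, 0.8, 5.0, 5.1, 4.9]
print(round(mean_shift_1d(data, 0.9), 2))  # 1.0
print(round(mean_shift_1d(data, 5.3), 2))  # 5.0
```

Clustering all starting points by the mode they converge to yields the segmentation; the improved variants cited above address the cost of doing this densely.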
Their rendering is based on mean-shift-guided interpolation. The rendering style is limited to a few approaches, such as changing segment colour and placing strokes, and it ignores motion analysis and motion emphasis. Our system segments the key-frame using 2D mean shift, identifies salient regions and then tracks them over the whole sequence. We extract motion information from the results and then perform motion emphasis. See Figure for some examples.

Some other NPAR techniques

Bregler et al. present a technique called "cartoon capture and retargeting" in [7] which is used to track the motion of a traditional animated cartoon and then retarget it onto different output media, including 2D cartoon animation, 3D CG models, and photo-realistic output. They describe vision-based tracking techniques and new modeling techniques. Our research tries to borrow this idea to extract motion information, from general video rather than a cartoon, using different computer vision algorithms, and then represent it in different output media. See Figure for some examples.

NPR application in Sports Analysis

Due to an increasingly competitive sports market, sports media companies try to attract audiences by providing ever more special and more specific graphics and effects. Sports statistics are often presented graphically on TV during sporting events, such as the percentage of time in a soccer game that the ball has been in one half compared to the other. These statistics are collected in many ways, both manually and automatically. It is desirable to be able to generate many statistics directly from the video of the game, and there are many products in the broadcast environment that provide this capability. The Telestrator [78] is a simple but efficient tool that allows users to draw lines within a 2D video image using a mouse. The product is sold as a dedicated box with a touch screen and a video input, and it outputs the video with the graphics produced.
Typically, four very simple controls such as "draw arrow" and "draw dotted line" are provided. Chinese translation: Video Special Effects. Peng Huang. Meier was the first to create a three-dimensional feel in painting: he mapped objects proportionally onto 3D surfaces, achieving a sense of depth on a two-dimensional medium. Building on his theoretical work, many object-space NPAR systems began to be developed.

Video Processing Technology - English IEEE Literature Translation

A New Motion Estimation and Segmentation Framework for Digital Video Processing (IEEE literature translation). School: School of Electronic and Control Engineering; Major: Control Engineering; Course: Digital Video Processing; Name: He Ye; Student ID: 2012232010; Instructor: Xu Kun; Completed: June 2013. IEEE literature translation: A New Motion Estimation and Segmentation Framework for Digital Video Processing. P. De Smet, I. Bruyland. Ghent University (RUG), TELIN=TW07V, Dep. of Telecommunications and Information Processing, Sint-Pietersnieuwstraat 41, B-9000 Ghent, Belgium. e-mail: pds@telin.rug.ac.be, ib@telin.rug.ac.be. Abstract: In this paper we give an overview of a new framework for processing digital video sequences.

We discuss how a variety of image-processing and computer-vision components can be combined to obtain a (semi-)automatic delineation or tracking of the moving objects that appear in digital video image sequences.

We illustrate current research results in image segmentation and video tracking, and briefly discuss the main application areas.

1 Introduction. This paper describes a motion estimation and segmentation framework for digital video sequences. Our goal is to develop a method that automatically segments and tracks the objects in a video sequence, along with their motion trajectories.

In the following sections we first discuss the overall structure of the framework.

We then briefly discuss each component of the framework.

Finally, we discuss practical applications of this research.

2 The motion estimation and segmentation framework. The general flow of the motion estimation, segmentation and tracking framework is shown in Figure 1.

First, a static segmentation of a single image is carried out.

This process is discussed further in Section 3.

Then, an initial motion estimate (see Section 4) is determined for each of the segments obtained.

Based on similarities in their motion estimates, these segments are then regrouped.

This allows a small set of coherently moving image regions to be identified.

Finally, as the sequence progresses, a tracking component follows the moving regions from frame to frame.

Alternatively, as shown in Figure 1, we can also immediately track a segmentation map supplied by the user.

Figure 1: The proposed motion estimation, segmentation and tracking framework. 3 Static image pre-segmentation. We use a technique called watershed segmentation to obtain an initial delineation of object boundaries in the image.
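The regrouping step described above, merging segments whose motion estimates are similar, could be sketched as a greedy grouping over per-segment motion vectors. The data layout, threshold and greedy policy below are illustrative assumptions, not the authors' implementation:

```python
def merge_by_motion(regions, threshold=1.0):
    """Greedily merge segments whose estimated motion vectors differ by less
    than `threshold` (Euclidean distance), mimicking the regrouping step.
    `regions` maps a segment id to its (dx, dy) motion estimate."""
    groups = []  # list of (member_ids, representative_motion)
    for rid, (dx, dy) in regions.items():
        for members, (gx, gy) in groups:
            if ((dx - gx) ** 2 + (dy - gy) ** 2) ** 0.5 < threshold:
                members.append(rid)
                break
        else:
            groups.append(([rid], (dx, dy)))
    return [sorted(members) for members, _ in groups]

# Four watershed segments: two move right together, one moves up, one is static.
motions = {0: (5.0, 0.0), 1: (5.2, 0.1), 2: (0.0, 4.0), 3: (0.0, 0.0)}
print(merge_by_motion(motions))  # [[0, 1], [2], [3]]
```

Each resulting group stands in for one coherently moving image region that the tracking component would then follow frame by frame.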

Video Media - Foreign Literature Translation

Video Media - Foreign Literature Translation (this document contains the English original and a Chinese translation)

Original text: Recent Advances in Peer-to-Peer Media Streaming Systems

ABSTRACT

Recently, there is great interest in using the peer-to-peer (P2P) network in media streaming. A great number of P2P media streaming systems have been developed. In this paper, we first give a brief survey of some key techniques and algorithms in the field of P2P streaming research. We also analyze the market view of the P2P streaming media service, and give a brief description of the current mainstream P2P streaming systems deployed in China.

I. INTRODUCTION

The rapid development of the Internet has changed the conventional ways that people access and consume information. Besides sending and receiving e-mails, browsing web pages, and downloading data files, people also hope to make telephone calls, watch movies and TV, and enjoy other entertainment via the same Internet. The ideal objective is that anyone can access anything (contents) from anywhere at any time. It is commonly conceived that the next-generation Internet should be a multimedia communication network based on the core IP protocol. Besides traditional data services, other multimedia contents such as voice, image, and video would also be delivered over the same IP network, among which the streaming media service will play an increasingly important role. Streaming media enables real-time and continuous delivery of video and audio data in a fashion of "flow", i.e., once the sender begins to transmit, the receiver can start playback almost at the same time while it is receiving media data from the sender, instead of waiting for the entire media file to be ready in local storage. Unlike a normal data file, a streaming media file is huge, and thus requires high channel bandwidth. Moreover, streaming media also carries stringent demands on the timing of packet delivery. The large size of streaming media as well as its delivery timing requirement makes a streaming media server expensive to set up and run.
In traditional client/server-based media streaming systems, all clients access the same server resource. In this scenario, on the one hand, the processing power, storage capacity, and I/O throughput of the server may become the bottleneck; on the other hand, a large number of long-distance network connections may also lead to traffic congestion. Such systems thus cannot afford quality of service (QoS) comparable with that of other traditional Internet services, such as WWW and FTP, and cannot meet the performance requirements of large-scale real-time media streaming applications, especially in the aspects of scalability, adaptability, fault-tolerance and robustness. To address these problems, researchers have recently proposed many solutions, such as IP multicast and CDN (content delivery network). However, both of them need support from special hardware. For an IP multicast network, large-scale multicast-capable routers must be redeployed in the Internet. For a content delivery network, a large number of CDN servers should be placed at the network edge, close to every receiver, and cooperate with each other to distribute multimedia data. The costs of infrastructure setup and administration are high, and these approaches cannot resolve the problems fundamentally. In recent years, Peer-to-Peer (P2P) networking technology has gained tremendous attention from both academia and industry. In a P2P system, peers communicate directly with each other to share and exchange data as well as other resources such as storage and CPU capacity; each peer acts both as a client that consumes resources from other peers and as a server that provides service for others. P2P systems benefit from the following characteristics: adaptation, self-organization, load-balancing, fault-tolerance, availability through massive replication, and the ability to pool together and harness large amounts of resources.
For example, file-sharing P2P systems distribute the main cost of sharing data - bandwidth and storage - across all the peers in the network, thereby allowing them to scale without the need for powerful and expensive servers. P2P systems were originally applied to network file sharing, and have achieved great success, for example Napster, Gnutella, Emule, and BitTorrent. However, different from general P2P file sharing, P2P media streaming poses more stringent timing and resource requirements for real-time media data transmission and rendering; it therefore needs to provide more restrictive functions in the respects of resource management, scheduling, and control. Various P2P media streaming systems have been proposed and developed recently. Even in China, nowadays there are more than a dozen P2P streaming applications deployed in the Internet. In this paper, we first give a brief survey of some key research issues and algorithms of P2P streaming systems, and then analyze and summarize the current status and development trend of the P2P streaming market in China.

II. RESEARCH PROGRESS OF P2P MEDIA STREAMING

A simple and straightforward way of implementing P2P streaming is to use the technique of application-layer multicast (ALM). With ALM, all peer nodes are self-organized into a logical overlay tree over the existing IP network, and the streaming data are distributed along the overlay tree. The cost of providing bandwidth is shared among the peer nodes, reducing the burden on the media server. In application-layer multicast, data packets are replicated and forwarded at end hosts, instead of at routers inside the network. Compared with IP multicast, application-layer multicast has several advantages.
On the one hand, since there is no need for support from routers, it can be deployed gradually on the current Internet infrastructure; on the other hand, application-layer multicast is more flexible than IP multicast, and can adapt to the different distribution demands of various upper-level applications. Thus, how to construct and maintain an efficient ALM-based overlay network has become one of the key problems of P2P streaming research. To address this problem, mainly three questions should be answered. The first relates to the P2P network architecture, i.e., with what topology should the overlay network be constructed? The second concerns routing and scheduling of media data, i.e., once the overlay topology is determined, how to find and select appropriate upstream peers from which the current peer receives the needed media data? The third is membership management, i.e., how to manage and adapt to the unpredictable behaviors of peer joining and departure? Recently, several P2P streaming systems and algorithms have been proposed to address the above issues. From the view of network topology, current systems can be classified into approximately three categories: tree-based topology, forest-based (multi-tree) topology, and mesh topology. In the following we give a brief summary of P2P streaming techniques according to this classification.

2.1 Tree-based topology

The typical model of a tree-based P2P streaming system is PeerCast. In PeerCast, nodes are organized as a single multicast tree, where a parent provides service directly only to its sons. The node joining and departure strategies used in PeerCast are simple. For node joining, a new node n first requests service from the root node S. If S has enough resources, it provides service for n directly; otherwise, S redirects the request of n to one of its sons. The son then repeats this process, until the parent of n is found.
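The joining procedure described above (request the root; if it is saturated, be redirected to one of its sons; repeat) can be sketched as follows. The capacity limit and the least-loaded redirect policy are illustrative assumptions; PeerCast itself offers random, round-robin and smart selection strategies:

```python
class Node:
    """A node in a simple single-tree overlay, as in PeerCast's join scheme."""
    def __init__(self, name, capacity=2):
        self.name = name
        self.capacity = capacity   # how many children this node can serve
        self.children = []

    def join(self, newcomer):
        """Accept `newcomer` if capacity remains; otherwise redirect the
        request to one of our children (here: the least-loaded one)."""
        if len(self.children) < self.capacity:
            self.children.append(newcomer)
            return self
        target = min(self.children, key=lambda c: len(c.children))
        return target.join(newcomer)

root = Node("S")
peers = [Node(f"n{i}") for i in range(5)]
parents = [root.join(p).name for p in peers]
print(parents)  # ['S', 'S', 'n0', 'n1', 'n0']
```

Because each node only sees its own children, even a smart local redirect policy like this one cannot guarantee a globally balanced tree, which motivates the ZIGZAG design discussed next.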
Since each node only maintains information about its parent and sons, an unbalanced tree may be constructed. Generally, there exist four route selection strategies in PeerCast: random selection, round-robin selection, smart selection according to physical placement, and smart selection according to bandwidth. To achieve a balanced multicast tree, a suitable routing policy should be chosen carefully for each peer node. ZIGZAG is another tree-based P2P streaming system, one which can construct more balanced multicast trees. ZIGZAG organizes receivers into a hierarchy of bounded-size clusters and builds the multicast tree on top of that. The connectivity of this tree is enforced by a set of rules, which guarantees that the tree always has a height of O(log_k N) and a node degree of O(k), where N is the number of receivers and k is a constant. Furthermore, the effects of network dynamics such as unpredictable receiver behaviors are handled gracefully without violating the rules. This is achieved with a control overhead of O(log_k N) for the worst-case receiver and O(k) for an average receiver. Other tree-based P2P streaming systems include NICE, Overcast, and Bayeux.

2.2 Forest-based topology

Conventional tree-based multicast is inherently not well matched to a cooperative environment. The reason is that in any multicast tree, the burden of duplicating and forwarding multicast traffic is carried by the small subset of the peers that are interior nodes in the tree. Most of the peers are leaf nodes and contribute no resources. This conflicts with the expectation that all peers should share the forwarding load. To address this problem, a forest-based architecture is beneficial: it constructs a forest of multicast trees that distributes the forwarding load, subject to the bandwidth constraints of the participating nodes, in a decentralized, scalable, efficient and self-organizing manner. A typical model of a forest-based P2P streaming system is SplitStream.
The key idea of SplitStream is to split the original media data into several stripes and multicast each stripe using a separate tree. Peers join as many trees as there are stripes they wish to receive, and they specify an upper bound on the number of stripes that they are willing to forward. The challenge is to construct this forest of multicast trees such that an interior node in one tree is a leaf node in all the remaining trees and the bandwidth constraints specified by the nodes are satisfied. This ensures that the forwarding load can be spread across all participating peers. For example, if all nodes wish to receive k stripes and are willing to forward k stripes, SplitStream will construct a forest such that the forwarding load is evenly balanced across all nodes while achieving low delay and link stress across the system. Striping across multiple trees also increases the resilience to node failures. SplitStream offers improved robustness to node failure and sudden node departures, like other systems that exploit path diversity in overlays. SplitStream ensures that the vast majority of nodes are interior nodes in only one tree. Therefore, the failure of a single node causes the temporary loss of at most one of the stripes (on average). With appropriate data encodings, applications can mask or mitigate the effects of node failures even while the affected tree is being repaired. Besides SplitStream, there are many other forest-based systems. Examples include building mesh-based trees (Narada and its extensions, and Bullet), leveraging layered coding (PALS), and multiple description coding (CoopNet).

2.3 Mesh topology

In conventional tree-based P2P streaming architectures, a peer can receive data from only a single upstream sender at a time. Due to the dynamics and heterogeneity of network bandwidths, a single peer sender may not be able to contribute the full streaming bandwidth to a peer receiver.
This may cause serious performance problems for media decoding and rendering, since the received media frames at some end users may be incomplete. In forest-based systems, each peer can join many different multicast trees and receive data from different upstream senders. However, for a given stripe of a media stream, a peer can still only receive the data of that stripe from a single sender, which results in the same problem as the single-tree case. The multi-sender scheme is more effective at overcoming these problems. In this scheme, a peer can simultaneously select and receive data from a set of different senders, each contributing a portion of the streaming bandwidth. In addition, unlike in the multi-tree systems, the members of the sender set may change dynamically, due to their unpredictable online/offline status changes and the time-varying bandwidth and packet-loss rate of the Internet. Since the data flow has no fixed pattern, every peer can both send data to and receive data from other peers, so the topology of the data plane resembles a mesh. The main challenges of the mesh topology are how to select the proper set of senders and how to coordinate and schedule the data sending of the different senders. Examples of mesh-based multi-sender P2P streaming systems include CollectCast, GnuStream, and DONet (CoolStreaming). CollectCast puts its emphasis mainly on the judicious selection of senders, constant monitoring of sender/network status, and timely switching of senders when a sender or the network fails or seriously degrades. CollectCast operates entirely at the application level but infers and exploits properties (topology and performance) of the underlying network. Each CollectCast session involves two sets of senders: the standby senders and the active senders. Members of the two sets may change dynamically during the session.
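A multi-sender receiver must assemble enough aggregate bandwidth from its candidate peers. The following greedy sketch illustrates the idea only; the candidate list, bandwidth figures and greedy policy are assumptions, not CollectCast's actual selection algorithm:

```python
def select_senders(candidates, stream_rate):
    """Greedily pick senders (peer, available_bandwidth_kbps) with the most
    spare bandwidth until their combined rate covers the stream rate.
    Returns the chosen peers, or None if the pool cannot sustain the stream."""
    chosen, total = [], 0
    for peer, bw in sorted(candidates, key=lambda c: -c[1]):
        if total >= stream_rate:
            break
        chosen.append(peer)
        total += bw
    return chosen if total >= stream_rate else None

pool = [("A", 150), ("B", 100), ("C", 80), ("D", 60)]
print(select_senders(pool, 300))  # ['A', 'B', 'C'], 330 kbps combined
```

In a real session the unchosen peers would form the standby set, and the selection would be re-run whenever a monitored active sender degrades.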
The major properties of CollectCast include the following: (1) it infers and leverages the underlying network topology and performance information for the selection of senders, based on a novel application of several network performance inference techniques; (2) it monitors the status of peers and connections and reacts to peer/connection failure or degradation with low overhead; (3) it dynamically switches active senders and standby senders, so that the collective network performance of the active senders remains satisfactory. GnuStream is a receiver-driven P2P streaming system which is built on top of Gnutella. It features multi-sender bandwidth aggregation, adaptive buffer control, peer failure or degradation detection, and streaming quality maintenance. GnuStream is aware of the dynamics and heterogeneity of P2P networks, and leverages the aggregated streaming capacity of individual peer senders to achieve full streaming quality. GnuStream also performs self-monitoring and adjustment in the presence of peer failure and bandwidth degradation. Recently, DONet implemented a multi-sender model by introducing a simpler, straightforward data-driven design which does not maintain a complex structure. The core of DONet is the data-centric design of the streaming overlay, and the gossip-based data scheduling and distribution algorithm. In the data-centric design of DONet, a node always forwards data to others that are expecting the data, with no prescribed roles like father/child, internal/external, or upstream/downstream. In other words, it is the availability of data that guides the flow directions, not a specific overlay structure that restricts them. This data-centric design is suitable for overlays with highly dynamic nodes. Gossip algorithms have recently become popular solutions to multicast message dissemination in P2P systems.
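Such gossip dissemination can be simulated in a few lines. The push-only model, the fanout and the fixed seed below are illustrative assumptions, not DONet's protocol:

```python
import random

def gossip(num_nodes, fanout, source=0, seed=42):
    """Simulate push gossip: each round, every node that already has the
    message forwards it to `fanout` randomly chosen nodes. Returns the
    number of rounds until all nodes are informed."""
    rng = random.Random(seed)
    informed = {source}
    rounds = 0
    while len(informed) < num_nodes:
        new = set()
        for _ in informed:
            for _ in range(fanout):
                new.add(rng.randrange(num_nodes))
        informed |= new
        rounds += 1
    return rounds

print(gossip(num_nodes=100, fanout=3))  # typically a handful of rounds
```

The simulation shows why gossip scales: the informed set roughly multiplies each round, so full dissemination takes a number of rounds logarithmic in the group size, and random target choice gives the failure resilience noted above.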
In a typical gossip algorithm, a node sends a newly generated message to a set of randomly selected nodes; these nodes do likewise in the next round, and so do other nodes until the message has spread to all. The random choice of gossip targets achieves resilience to random failures and enables decentralized operation. Similar to related work, DONet employs a gossiping protocol for membership management. The data scheduling and distribution method used in DONet is also partially motivated by the gossip concept. It uses a smart partner selection algorithm and a low-overhead scheduling algorithm to intelligently pull data from multiple partners, which greatly reduces redundancy. Experiments show that, compared with a tree-based overlay, DONet can achieve much more continuous streaming with comparable delay.

III. P2P STREAMING IN CHINA

Since the first practical P2P streaming media system was born, the P2P streaming service has experienced significant growth in China, especially in the years 2005 and 2006. According to a market report, more than 12,000,000 Internet users have accessed P2P streaming services or downloaded P2P streaming software in China. It is predicted that by the end of 2006 this number will grow to above 25,000,000. Facing such a large pre-profit market, there are by now at least 15 organizations providing P2P or similar streaming services. The most representative of them, PPlive, PPstream, Mysee, ROX and UUsee, have taken over 80% of the current market share. In the rest of this section, we will analyze the market view of the P2P streaming media service, and then give a brief introduction to the current mainstream P2P media streaming systems deployed in China. There are three reasons why the P2P media streaming service has become so popular in China in recent years. Firstly, thanks to the rapid advance of audio and video compression technologies, users can easily access streaming media at a very low bit rate.
More and more multimedia productions, TV clips, and movies are available all over the Internet. This makes it easier than before for P2P streaming service providers to obtain enough media sources for their service. With a varied and abundant supply of media content, service providers can attract more and more clients. The larger the client base, the easier it is to test the software and services. Secondly, compared with the traditional ways of watching video from the Internet, such as VOD, users get more satisfying quality of service in the current bandwidth-limited network environment. Finally, as users' network access bandwidth grows, they demand a richer experience: not simply text and pictures, but fluent, high-definition video. This trend leaves large room for P2P streaming services to grow. Although the P2P streaming service has achieved a considerable user experience and certainly has a bright future, there are still several issues that need attention. First, current service providers have not yet found any distinct business model. Currently, almost all P2P solution vendors are providing TV program and movie broadcasting free of charge. Obviously, it is not practical for a service provider to charge users while promoting the service; in the starting period, growing user numbers and gaining subscribers are the key points, rather than earning profits. Second, P2P streaming service providers face the challenge of copyright. As just mentioned, some P2P vendors provide TV/movie broadcasting using third-party content without checking its legal status. For long-term development, service providers must cooperate with content providers to achieve a win-win. Thirdly, P2P streaming service providers must face surveillance from Internet service providers (ISPs) and governmental authorities.
On the one hand, the purpose of P2P is to maximize the usage of bandwidth resources; to the contrary, however, the bandwidth surge caused by such applications is often intolerable to ISPs. ISPs usually take restrictive actions, such as limiting the application's bandwidth or even blocking the application from running on the Internet. However, limiting or blocking is not the most proper way to solve the problem, and the conflicts between ISPs and P2P streaming service providers will persist for a while. On the other hand, being regarded as a new media trend on the web, P2P streaming services must be supervised by governmental authorities to guarantee the orderliness of the industry. Under these two forms of surveillance, P2P streaming service providers must play the game prudently. PPlive, invested by Soft Bank HK and acknowledged as the number one in terms of subscribers in China, was founded in early 2005. PPlive has very stable playing quality, and it seldom switches the player's state to buffering during playback. When watching a new channel, the average waiting time from searching to playing is about 35s to 55s. PPlive provides over 200 channels, categorized by Provincial TV stations, Sports, Cartoon, Entertainment, HK films, Gaming, Movies etc., but very few programs from overseas TV stations. PPlive currently only supports broadcasting, and almost all programs have a bit rate between 300kbps~400kbps with media codecs like Windows Media Video (.wmv) or Real Media (.rm). Its program timetable is both shown on the website and displayed in the client player. Advertising commercials are supported by the client. It is worth mentioning that PPlive broadcast the Supergirl Contests in 2005, and it was reported that concurrent online users hit a record of 500k for the final contest.
Despite its popularity, some of the content PPlive provides lacks copyright clearance, which may be a hidden trouble for its long-term development. PPstream, founded by two engineers in Sichuan Province, was also announced in 2005. Compared with PPlive, PPstream has similar functions but higher connecting speed. Usually, when opening a new channel, the average waiting time is about 25s~45s, and its playback fluency is as good as PPlive's. PPstream provides around 90 channels, categorized by Phoenix TV, Wenguang TV, Sports, Entertainment, Movies, TV drama series, Gaming & cartoon, Music and radio channels, etc. PPstream currently broadcasts Windows Media Video coded QVGA and CIF quality videos with bit rates around 300kbps~440kbps. Its client software shows the channel list and timetable beside the player, and advertising commercials are also supported. It has been reported that PPstream will cooperate with some ISPs for higher performance, and its market policy seems more steady and long-ranged. Mysee, invested by aurora and founded in late 2005, is regarded as a latecomer, but it grew quickly in 2006. Now, thanks to numerous media reports, it is very famous on the Internet. Mysee uses the same video codecs as PPstream, but sometimes its connecting speed and playing quality are not as good as those of PPstream and PPlive. It currently broadcasts around 90 channels, categorized by news, movies, TV drama series, sports, entertainment, music, information, cartoon and science. Mysee does not provide a client application player to view programs; all channels are viewed in the Microsoft IE browser, and the channel list and timetable are both displayed on its website. This approach makes it easy for the service provider to arrange recommended content for the user, but it lacks user stickiness. It is reported that Mysee has had nearly a year of good cooperation with Hunan TV station for video broadcasting.
It can be predicted that, with its advantage in cooperating with TV stations and ICPs, Mysee will earn a considerable market share. Roxbeam, formerly called CoolStreaming, is regarded as the first practical P2P streaming software. CoolStreaming was developed in late 2004 and gave a reliable model of P2P streaming, but it was forced to close down due to lawsuits regarding its content in early 2005. Currently Roxbeam is supported by SoftBank Japan. It not only supplies a P2P streaming service, but also integrates an online community called LeiKe and chatting services into the client software. Users can watch not only broadcast programs but also short video clips via the VOD service. Roxbeam tries to provide various video resources to its users; its goal is not simply to provide a P2P streaming service, but to provide an online video sharing and communication platform. Obviously, Roxbeam has an even grander blueprint, but whether this blueprint can come true remains to be proved by the market. UUsee, invested by SIG and formed in mid 2005, is also a new power in the P2P streaming service. Having a good relationship with CCTV, UUsee has more advantages than its competitors on program copyrights, which helps greatly in the live broadcast of large-scale activities and programs. UUsee provides about 100 channels on its client player, categorized by UUsee recommendation, entertainment, sports, movies, TV drama, fashion, cartoon, gaming, science, social news, civil TV stations etc.; the channel list and timetable are shown in a friendly way on the client player. UUsee also provides thousands of VOD programs on its website, which can effectively increase its stickiness to users. According to the newest data from ACNielsen, during the live broadcast of CCTV's 2006 Spring Festival Celebration, UUsee's peak concurrent user number reached 400,000, which is the largest number in the authority's report.
By the daily reach statistics from (http://www. ), over the recent half year, UUsee and PPlive take the first two places in the competition, followed by PPstream and Mysee, with Roxbeam last. It can be judged that the World Cup in June and Super Girl from May to September brought more audiences to the service providers. Other P2P streaming service providers such as QQLive, Pcast, TVants, Poco, 51TV and so on are making similar contributions to this market. The chances are equal for every competitor; whether they succeed depends on the market's choice.

IV. CONCLUSION

Recently, P2P streaming has attracted a lot of attention from both academia and industry. Various P2P media streaming algorithms have been studied, and systems have been developed. Nowadays more than a dozen P2P streaming systems have been deployed in China. In this paper, we first give a brief survey of the progress of P2P streaming research, bring forward some fundamental problems for P2P streaming application development, and review several solutions proposed to address the problems. Furthermore, we study the factors which can impact the trends of the P2P streaming market, and give a brief summary of the current progress of the P2P streaming market in China. Chinese translation: Recent Advances in P2P Video-on-Demand Systems. Gao Wen, Huo Longshe, Fu Qiang. Abstract: In recent years, people have become increasingly interested in using P2P video-on-demand systems, and a large number of P2P media streaming systems have been developed.

影视视频剪辑外文文献翻译字数3000多

文献出处: Belk R. The research of film and television video editing techniques [J]. Qualitative Market Research: An International Journal, 2016, 8(2): 132-141.

原文

The research of film and television video editing techniques

Belk R

Abstract

Video editing is an indispensable component of film and television art, and to a great extent it determines the communication effect of a film or television work. Practice has proved that mature video editing skills can make a work far more attractive. On the basis of the relevant theory of video editing, this paper further explores a number of editing techniques. Film and television works have become an important, inescapable part of the general public's everyday life. When the Lumière brothers screened their first films in a Paris café, cinema was born. At that time, however, the Lumière brothers merely used the camera to record what happened around them and did not realize the importance of editing; it was not until the American director Griffith began shooting scene by scene and shot by shot that the art of film editing gradually came to be understood and appreciated.

Keywords: Video; Processing; Skills

1 Related concepts

Editing is the selection, decomposition, and recombination of a film's images and sounds in a special and meaningful way, eventually forming a coherent and fluent work with a clear theme and artistic appeal. Even if a film or television work has a strong script, well-shot pictures and sound, and a strong cast, it cannot produce the desired viewing effect without the refinement and packaging of post-production editing. Video editing generally goes through several steps: the rough cut, the second cut, and the fine cut.
The rough cut is based on the director's intention, assembling the material according to the shooting script; the second cut refines and supplements the rough cut; the fine cut is where the editor repeatedly verifies and examines the previous cuts and, combining special techniques such as montage, trims the material more carefully, revising again and again until the edit fulfills the director's intention and meets the dual goals of narration and expression. A good editor, working through a large quantity of footage, tries to grasp the overall style of the work and the intentions of the director and the program, and also needs excellent editing procedure and technique to make the work stand out. Each cut is like the blink of an eye; the ultimate purpose of editing is to meet the needs of the audience, win recognition for the work, and finally achieve a good communication effect.

2 Techniques

Editing style is the editor's overall understanding and grasp of the intentions the director conveys, which eventually forms the editor's overall creative idea. When a film leaves the audience feeling that its style is inconsistent, the reason is usually that the director and the editor did not settle on a unified style. To realize the intended viewing effect, the editor must first communicate with the director on receiving the material, read the script or shooting script, and then choose a unified editing style according to this information and carry it through the whole work. The ultimate goal of editing is a harmonious unity of the program's theme with its structure, content, and form; the editor must not cast aside the director's intention and the creation of the work itself and arrange the editing style arbitrarily according to personal subjective preference.
Editing style can be roughly divided into the following categories.

2.1 The traditional editing style

Traditional editing is the basic technique of video editing. Its role is, first, to prevent confusion between shots and ensure smooth transitions, and second, to make the paragraphs of the work clear, so that the time, place, and details of the narrative are plain and the audience is not left confused. The traditional style accounts for a very large proportion of film editing and is a basic skill every editor must master. Grasping it requires attention to three aspects. First, the directions in linked shots must not be disordered. A character's walking direction and the spatial relationships between characters must remain consistent, and must not become confused by a change of shot. For example, to show a character walking from the lotus pond in a park to a seat: if the character enters the frame from the left and exits on the right, then in the next shot the character walking to the seat should still enter from the left. Otherwise the audience will feel visually confused, especially in chase or fight scenes. Second, movement between shots must be coordinated. The so-called "dynamic to dynamic" and "static to static" means grouping similar material together in rhythm to form a coordinated, stable effect. "Dynamic to dynamic" is commonly applied to the characters' movements: for example, when one character slaps another, one shot shows the hand raised and the next carries the swing through as continuous motion. Likewise, in push, pull, pan, and tracking shots, when the camera moves from one scene to another it can cut directly to a pan in the same direction and at a similar speed, forming a natural transition.
"Static" are often used during transitions, two relatively stationary camera can bring harmony on the look and feel, at the same time let the audience play imagination and thinking.Finally, omit unnecessary process.In real life, the audience is familiar with the actions or behaviors, in the process of video editing often can be omitted, only a few key action, form a compact clips of coherent effects.2.2 Creative editing styleCreative editing style aims to improve the effect of the film and television works of art to watch, it including the dramatic effect clip, clip and rhythmic effect clip expressive effect. Dramatic effect clip by changing the lens of sequence and timing, change the lens clips point, finally let each shot in the plot, the best time to bring the audience find everything new and fresh, an Epiphany viewing experience. For example, in the film, props repeatedly appears in the film, it is one of the important props will film to a climax. Expressive effect clip that the whole narrative content in fluency, on the basis of editing combined with their own experience, bold selection, focus some analogy to the lens, so that the film formation reveals a certain meaning, apply colors to a drawing atmosphere effects of editing style. Rhythmic effect cliprefers to the lens time is short, the scene changes quickly, let the audience feel urgency, the effect of psychological tension, this clip effect can't disorderly use, must be combined with the overall structure and the characteristics of the plot to use film.3 The video clips rhythmsThe tempo of the film and television works determines its overall style, some works even depend entirely on the rhythm to infect the audience's emotions and feelings. Editing rhythm to organize various art elements in film and television works effectively, depicting the hero image, create a special artistic conception, and effectively push forward the plot. 
As a result, the overall pace of a work plays an unignorable role in shaping its artistic atmosphere and raising its quality, and may even decide its success or failure; whether the rhythm is grasped accurately is one of the important factors in evaluating a film or television work. A film's rhythm can be divided into internal and external rhythm. Internal rhythm refers to the contradictions of the plot itself and the psychological changes of the characters; external rhythm refers to the switching speed of the shots themselves, often achieved by means of montage and camera movement. Besides the film's overall rhythm, each segment and scene should also have a rhythm of its own, mastered according to the characters' emotions, the drama, and the imagery, so that every scene is expressed completely and accurately. The editing rhythm of a work is influenced by the movement of the figures in the frame, the speed of camera movement, and the lengths and transitions of the shots, along with the pace of the narrative and of the characters' changing moods; to a certain extent it determines the merits of the work. In early shooting, the photographer chooses shots according to the director's creative intention, the overall style of the work, and professional judgment, and this forms the basis of the film's rhythm. After this preliminary work is complete, the editor arranges the tension and release of the cut according to the priorities of the unfolding plot. Whether in the rhythm of the film as a whole or in the rhythm between its passages, all the rhythms must be appropriate and must not leave the audience psychologically uncomfortable. A consistent overall style does not mean the whole work keeps the same pace throughout; rather, the pace should vary appropriately as the plot unfolds, forming climactic passages and quiet ones, letting the film's rhythm ferment the audience's emotions.
In the film Run Lola Run, three segments present three different endings for Lola and her boyfriend, revealing how small differences in everyday events can lead to different final results. In the film, Lola tries desperately to raise money to save her boyfriend; the shots of her frantic running form a fast rhythm, while the rolling camera during her runs and the cuts to the clock on the wall heighten the narrative's sense of urgency and crisis, and the audience cannot help feeling tension and fear for the fate of the characters.

4 Applying editing techniques

Clever joining of images can create special effects: through a meaningful arrangement of shots, editing can generate constant association and a feeling of freshness. The film Inception, for example, uses a variety of montage techniques in just this way to carry the audience on a special journey of fantasy. There are many ways to link shots smoothly, such as the fade-out, fade-in, and dissolve, which make transitions between scenes smooth and natural; direct cuts, jump cuts, and flashbacks can also be used to give a film a fast rhythm and a strong sense of impact.

译文

影视视频剪辑技巧研究

Belk R

摘要

视频剪辑是电影、电视艺术不可或缺的重要组成部分,它在一定程度上决定了影视作品的传播效果。

动画设计外文文献翻译

文献出处: Amidi, Amid. Cartoon Modern: Style and Design in Fifties Animation. Chronicle Books, (2006): 292-296.

原文

Cartoon Modern: Style and Design in Fifties Animation

Amidi, Amid

During the 1970s, when I was a graduate student in film studies, UPA had a presence in the academy and among cinephiles that it has since lost. With 16 mm distribution thriving and the films only around twenty years old, one could still see Rooty Toot Toot or The Unicorn in the Garden occasionally. In the decades since, UPA and the modern style it was so central in fostering during the 1950s have receded from sight. Of the studio's own films, only Gerald McBoing Boing and its three sequels have a DVD to themselves, and fans must search out sources for old VHS copies of the others. Most modernist-influenced films made by the less prominent studios of the era are completely unavailable.

UPA remains, however, part of the standard story of film history. Following two decades of rule by the realist-oriented Walt Disney product, the small studio boldly introduced a more abstract, stylized look borrowed from modernism in the fine arts. Other smaller studios followed its lead. John Hubley, sometimes in partnership with his wife Faith, became a canonical name in animation studies. But the trend largely ended after the 1950s. Now its importance is taken for granted. David Bordwell and I followed the pattern by mentioning UPA briefly in our Film History: An Introduction, where we reproduce a black-and-white frame from the Hubleys' Moonbird, taken from a worn 16 mm print. By now, UPA receives a sort of vague respect, while few actually see anything beyond the three or four most famous titles.

All this makes Amid Amidi's Cartoon Modern an important book. Published in an attractive horizontal format well suited to displaying film images, it provides hundreds of color drawings, paintings, cels, storyboards, and other design images from 1950s cartoons that display the influence of modern art. Amidi sticks to the U.S.
animation industry and does not cover experimental work or formats other than cel animation. The book brings the innovative style of the 1950s back to our attention and provides a veritable archive of rare, mostly unpublished images for teachers, scholars, and enthusiasts. Seeking these out and making sure that they reproduced well, with a good layout and faithful color, was a major accomplishment, and the result is a great service to the field.

The collection of images is so attractive, interesting, and informative that it deserved an equally useful accompanying text. Unfortunately, both in terms of organization and the amount of information provided, the book has major textual problems.

Amidi states his purpose in the introduction: "to establish the place of 1950s animation design in the great Modernist tradition of the arts". In fact, he barely discusses modernism across the arts. He is far more concerned with identifying the individual filmmakers, mainly designers, layout artists, and directors, and with describing how the more pioneering ones among them managed to insert modernist style into the products of what he sees as the old-fashioned, conservative animation industry of the late 1940s. When those filmmakers loved jazz or studied at an art school or expressed an admiration for, say, Fernand Léger, Amidi mentions it. He may occasionally refer to Abstract Expressionism or Pop Art, but he relies upon the reader to come to the book already knowing the artistic trends of the twentieth century in both America and Europe. At least twice he mentions that Gyorgy Kepes's important 1944 book The Language of Vision was a key influence on some of the animators inclined toward modernism, but he never explains what they might have derived from it. There is no attempt to suggest how modernist films (e.g. Ballet mécanique, Das Cabinet des Dr. Caligari) might have influenced those of Hollywood.
On the whole, the other arts and modernism are just assumed, without explanation or specification, to be the context for these filmmakers and films.

There seem to me three distinct problems with Amidi's approach: his broad, all-encompassing definition of modernism; his disdain for more traditional animation, especially that of Disney; and his layout of the chapters.

For Amidi, "modern" seems to mean everything from Abstract Expressionism to stylized greeting cards. He does not distinguish Cubism from Surrealism or explain what strain of modernism he has in mind. He does not explicitly lay out a difference between modernist-influenced animation and animation that is genuinely a part of modern/modernist art. Thus there is no mention of figures like Oskar Fischinger and Mary Ellen Bute, though there seems a possibility that their work influenced the mainstream filmmakers dealt with in the book.

This may be because Amidi sees modernism's entry into American animation only secondarily as a matter of direct influences from the other arts. Instead, for him the impulse toward modernism is a movement away from conventional Hollywood animation. Disney is seen as having established realism as the norm during the 1930s and 1940s, so anything stylized would count as modernism. Amidi ends up talking about a lot of rather cute, appealing films as if they were just as innovative as the work of John Hubley. At one point he devotes ten pages to the output of Playhouse Pictures, a studio that made television ads which Amidi describes as "mainstream modern" because "it was driven by a desire to entertain and less concerned with making graphic statements". I suspect Playhouse rates such extensive coverage largely because its founder, Adrian Woolery, had worked as a production manager and cameraman at UPA. At another point Amidi refers to Warner Bros.
animation designer Maurice Noble's work as "accessible modernism". This willingness to cast the modernist net very wide also helps explain why so many conventional-looking images from ads are included in the book. Amidi seems not to have considered the idea that there could be a normal, everyday stylization with broad appeal that might have derived ultimately from some modernist influence that had filtered out, not just into animation, but into the culture more generally. There was such a popularization of modern design in the 1940s and especially the 1950s, and it took place across many areas of American popular culture, including architecture, interior design, and fashion. Thomas Hine dealt with it in his 1999 book, Populuxe: From Tailfins and TV Dinners to Barbie Dolls and Fallout Shelters. Hine doesn't cover film, but the styles that we can see running through the illustrations in Cartoon Modern have a lot in common with those in Populuxe. Pixar pays homage to them in the design of The Incredibles.

Second, Amidi seeks to establish UPA's importance by casting Walt Disney as his villain. Here Disney stands in for the whole pre-1950s Hollywood animation establishment. For the author, anything that isn't modern in style is tired and conservative. His chapter on UPA begins with an anecdote designed to drive that point home. It describes the night in 1951 when Gerald McBoing Boing won the Oscar for best animation of 1950, while Disney, not even nominated in the animation category, won for his live-action short, Beaver Valley. UPA president Stephen Bosustow and Disney posed together, with Bosustow described as looking younger and fresher than his older rival. Disney was only ten years older, but to Amidi, Bosustow's "appearance suggests the vitality and freshness of the UPA films when placed against the tired Disney films of the early 1950s". That line perplexed me.
True, Disney's astonishing output of the late 1930s and early 1940s could hardly be sustained, either in quantity or quality. But even though Cinderella (a relatively lightweight item) and the shorts became largely routine, few would call Peter Pan, Alice in Wonderland, and Lady and the Tramp tired. Indeed, the two Disney features that Amidi later praises for their modernist style, Sleeping Beauty and One Hundred and One Dalmatians, are often taken to mark the beginning of the end of the studio's golden age.

In Amidi's view, other animation studios, including Warner Bros., were similarly resistant to modernism on the whole, though there were occasional chinks in their armor. The author selectively praises a few individual innovators. A very brief entry on MGM mentions Tex Avery, mainly for his 1951 short, Symphony in Slang. Warner Bros.' Maurice Noble earns Amidi's praise; he consistently provided designs for Chuck Jones's cartoons, most famously What's Opera, Doc?

The book's third problem arises from the decision to organize it as a series of chapters on individual animation studios arranged alphabetically. There is at least some logic to a chronological or thematic order, or even to taking the studios in order of importance; alphabetical order is arbitrary and renders the relationships between studios haphazard. An unhappy byproduct of this strategy is that the historically most salient studios come near the end of the alphabet. After chapters on many small, mostly unfamiliar studios, we at last reach the final chapters: Terrytoons, UPA, Walt Disney, Walter Lantz, Warner Bros. Apart from Lantz, these are the main studios relevant to the topic at hand. Amidi prepares the reader with only a brief introduction and no overview, so there is no setup of why UPA is so important or of what context Disney provided for the stylistic innovations that are the book's main subject.

译文

现代卡通:50年代的动画风格和设计

Amidi, Amid

在20世纪70年代,当我还是一个电影专业的研究生时,美国联合制片公司UPA就受到了学院和影迷们的关注。

英语影片字幕的翻译的论文

英语影片字幕的翻译的论文

论文关键词: 英语电影;字幕;翻译

论文摘要: 翻译英语影片需要译者对原文进行提炼和再加工,既要保持原片的风格,又要力图简明易读,从而使观众更好地欣赏英语影片。

近些年来,随着人们知识水平的不断提高,人们对大量入境影片的欣赏水平也日益增高。大量国外影片的引进,使市场对外语片字幕翻译的需求不断增长。但是当前英语影片字幕翻译的标准很不统一,这无疑会影响影片的质量。本文旨在针对一些常见的问题,就如何翻译英文字幕进行探讨。

一、保持原片风格

英文电影字幕翻译应达到的境界应该是尽可能地保持原片的风格,从而让国内观众领略到英文原片的文化内涵和艺术底蕴。近些年来,国人学习英语的热情日益高涨,英文水平也普遍提高,所以在影片的中文字幕当中保持原汁原味就显得至关重要。如何达到保持原片风格的要求呢?

1. 准确理解原文,正确传达语意。中文字幕翻译者必须具备较高的英语水平,做到准确理解英文原文。译者若遇到翻译不通或不确定的地方,应仔细查阅工具书,不应望文生义,否则会影响原文的真正含义。例如,sweet meat是英文中很常见的一个表达方式,意义为"蜜饯",但在某部影片当中却有人翻译成了"甜肉"。英文中还有一些相似的表达方式,诸如红肉"red meat",译为牛羊肉;白肉"white meat",译为鸡肉。在电影《公主日记2》中,主人公在讨论昆虫时提到的英国著名生物学家"大卫·艾登堡"竟被翻译成"福尔摩斯"。影片《钢木兰》有这样一幕:谢尔比对妈妈不满,她认为妈妈为自己的婚礼请了九个女傧相过于虚荣,就引用了爸爸引用过的一位诗人的话来讽刺她:an ounce of pretension is worth a pound of manure,然而译者却将其译成"一份自命不凡能够换得一份收获",不仅原来的讽刺意味完全消失,而且含义也与原文相距甚远。它的原文意思是"一盎司虚荣能换得一磅大粪",虽然粗俗,却把诗人痛恨虚荣的心情表达得非常到位,直接翻译更能体现原作的本意。

影视拍摄与后期制作外文文献翻译

影视拍摄与后期制作外文文献翻译(含:英文原文及中文译文)

文献出处: John Wiley. Video pre-production and post-production [J]. Frontiers in Psychology, 2014, 5(5): 262-271.

英文原文

Video pre-production and post-production

John Wiley

Film and television have become the most popular and most influential media types. From the fantasy worlds created by Hollywood movies, to the real life that television news focuses on, to overwhelming television commercials, they have profoundly influenced our world. In the past, the production of film and television programs was a job only for professionals and seemed to be covered by a mysterious veil. For more than a decade, digital technology has fully entered the film and television production process: computers have gradually replaced much existing video and television equipment and play a significant role in every aspect of production. Until recently, however, production relied on extremely expensive, specialized hardware and software; non-professionals rarely had the opportunity even to see these devices, let alone master them well enough to produce their own works. With the significant increase in PC performance and the continuous decrease in prices, film and television production has gradually shifted from professional-grade hardware to the PC platform. The once costly professional software has been ported to the PC platform, and its price has become increasingly affordable. At the same time, the application of film and television production has expanded from the professional film and television field to the broader fields of computer games, multimedia, networks, and home entertainment. Many practitioners in these professional fields, along with a large number of film and television enthusiasts, can now use their own computers to make their own television programs. Many people have come into contact with the production of movies and TV programs.
They started with 3D computer animation, and by now many people understand and have even mastered it. Many books have introduced this area, but most are not aimed at film and television post-production; books on that subject are fewer, and they are generally concerned only with the operation of a particular piece of software, paying little attention to the basic workflow and principles of post-production. I hope that through this book readers can not only understand and master the software but also gain a more comprehensive understanding of the whole post-production process.

1 Film and Television Post-Production Overview

The production of film and television programs is a rather complicated process. Because the programs themselves are so varied, there is a world of difference between them, from expensive feature films to personally produced home videos. Although the intentions behind these programs, their production budgets, and the manpower and material resources invested differ greatly, their production processes are quite similar. In general, production can be divided into three major stages: pre-production, shooting, and post-production.

Pre-production is the stage of planning and preparation. For a film, this process usually starts with the script, followed by a series of complicated steps such as setting budgets, raising funds, selecting shooting locations, casting actors, and forming the photography crew. For an individual creator, it may be nothing more than a whim, after which you pick up your camera and shoot a few minutes of the people and surroundings around you. The shooting stage is the process of recording pictures with a camera.
The material shot at this stage can be said to be the cornerstone of the final film. When the main shooting work is completed, the project reaches post-production. Traditionally, the main task of this stage is editing: cutting the scattered material from the shooting stage into a complete movie. During the shooting of a movie, the material actually shot is generally several times or even tens of times the length of the final cut. The editor must pick the most satisfying material from this mass of footage and organize it in an appropriate way. Post-production also includes the production and mixing of sound. Generally only at this stage, when the extra material has been removed, the shots joined in sequence, and picture and sound synchronized, can the full picture of the movie be seen. A large part of a film's information and meaning is contained not in the picture of any single shot but in the combination of a series of pictures and in the connection between picture and sound; it is no exaggeration to say that the art of film and television is to a great extent realized in post-production.

Traditional movie editing is a true splicing process. The exposed film is processed to produce a set of work prints for editing. The editor selects the desired shots from a large number of prints, cuts the film with scissors, joins the pieces with tape or glue, and then watches the effect of the cut on the editing table. This process of cutting and pasting is repeated continuously until the editor is satisfied with the result. The method is still common today. Although it may seem primitive, it is non-linear: the editor does not have to work from beginning to end, because the work print can be cut at any point, a shot inserted, or some frames cut out directly, without affecting the rest of the movie. However, this method is powerless for many production techniques. The editor cannot create a dissolve between two shots, nor adjust the color of the image; such effects can only be completed during printing, and the manual operation of scissors and glue is inefficient.

Traditional TV editing is done on an editing console, which usually consists of a player and a recorder. The editor selects a suitable piece of material on the player, records it onto the tape in the recorder, and then looks for the next shot. Advanced editing consoles also offer powerful effects functions: they can create various dissolves and effects, adjust the colors of the image, and generate subtitles. However, because a tape records pictures sequentially, it is not possible to insert a shot between existing pictures, nor to delete a single shot, unless all subsequent pictures are re-recorded. This kind of editing is therefore called linear editing, and it imposes many restrictions on editors.

We can see that while the traditional editing methods have their own strengths, they also have great limitations, which reduce the creativity of editors and waste valuable time in cumbersome operations. Computer-based digital non-linear editing gives the editing method great room for advancement and development. This technology records material on computer disks and uses the computer's computation, reading, and storage to perform the editing process.
It adopts the non-linear mode of film editing but replaces scissors-and-glue manual work with simple mouse and keyboard operations, and the result of an edit can be played back immediately, greatly improving production efficiency.

2 3D Computer Animation and Compositing

With the rapid development of film and television production, post-production has shouldered a very important responsibility: the production of special-effects shots, that is, shots that cannot be obtained by direct photography. Most early film and television effects were accomplished through traditional methods such as model making, special photography, and optical compositing, mainly during the shooting and printing stages. Computer digital techniques provide better and more effective means for effects production and allow many effects that once required models and photographic tricks to be produced with computers; as a result, more and more effects work has become part of post-production. There are two main reasons a special-effects shot cannot be photographed directly. First, the subject or environment does not exist in real life, or exists but cannot be photographed, such as a dinosaur or an alien. Second, the subject and the environment both exist in real life but cannot occupy the same space and time, as when the protagonist of a movie escapes from a dramatic explosion.

For the first difficulty, we must use something else to imitate the photographic subject. Common means include building models, using makeup to turn actors into other creatures, and 3D computer animation. In fact, 3D computer animation is also a kind of model; it is simply a virtual model that exists inside the computer. In short, solving this type of problem requires creating something out of nothing. The solution to the second difficulty is compositing.
Since the subjects all exist, they can be photographed separately and the separate shots combined. In the past, compositing relied mainly on special-effects photography and printing techniques, but the rapid development of computer digital compositing has made those methods obsolete. The rapid development of effects-driven films in recent years has also driven the growth of the entire film industry. Computer digital compositing is very different from 3D computer animation: it is not a technique for creating imagery from nothing but one that combines existing material images while allowing them to be extensively modified and beautified; it can be called an icing-on-the-cake technology. In television programs we often see pictures composed of many unrelated objects, obviously produced not by shooting but by compositing; many television title sequences, advertisements, MTVs, and similar programs are like this. The primary condition for such compositing is not realism but a purely aesthetic sense of form, yet in method it differs little from realistic compositing. Through the above introduction, we can roughly summarize film and television post-production: using footage from real-life shooting, producing special-effects shots through 3D computer animation and compositing, then joining the shots together into a complete film and creating its sound. In the figure below we can see the basic flow of digital television post-production. At present most television programs, such as advertisements, title sequences, MTVs, and TV dramas, are produced in this way.
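The combination of separately photographed elements described above rests on one core operation: compositing a foreground over a background through a matte (alpha channel). The sketch below illustrates the standard "over" operator with straight (non-premultiplied) alpha; pixels are modeled as plain RGB float tuples purely for illustration, not as any production pipeline.

```python
# Digital compositing at its core: layer a foreground element over a
# background using a matte (alpha). This is the classic "over" operator:
#   result = fg * alpha + bg * (1 - alpha)
# A pixel here is an (r, g, b) tuple of floats in [0, 1]; the matte holds
# one float per pixel. Illustrative sketch only.

def over(fg_pixel, bg_pixel, alpha):
    """Composite one foreground pixel over a background pixel."""
    return tuple(f * alpha + b * (1.0 - alpha)
                 for f, b in zip(fg_pixel, bg_pixel))

def composite(fg, bg, matte):
    """Composite whole images given as flat lists of pixels plus a matte list."""
    return [over(f, b, a) for f, b, a in zip(fg, bg, matte)]
```

With a matte of 1.0 the foreground fully replaces the background (the protagonist), with 0.0 the background shows through (the explosion plate), and intermediate values blend soft edges such as hair or smoke.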
However, in the process of editing, the following requirements must be met:

(A) The picture must be clean
A clean picture includes the following aspects:
1. No stray frames. A clean cut joins complete shots with nothing extra in between. If a shot is not trimmed exactly at its end, frames from the following take are carried over by mistake; these unwanted frames are stray frames. Likewise, when two shots are joined there must be no black field between them. Trimming the picture cleanly is one of the most important tasks in editing.
2. Complete shots. The editing pattern for a shot is static, then moving, then static again. A shot must be complete when it is joined; a static shot must be held for a certain length, for example 5-8 seconds for a medium shot, and a moving shot must have a clear start and finish to its movement.
3. Moving shots must settle. A shot cannot end while the frame is still in motion, so its final moment must be still. Even with a dynamic cut, the last shot must come to rest: the subject stops moving, or the camera stops.

(B) The sound must be clean
This is what beginners usually overlook in editing, and it includes several aspects:
1. Speech must be complete. Whether original dialogue or narration, no sentence may be cut off halfway.
2. The sound must be pure. Especially noisy recordings cannot be used; the audience must be able to understand what is said.
3. The volume must be appropriate. The volume of a program must be uniform, neither too high nor too low, and the proportion of background music must be balanced so that it does not drown out the speech.

[Chinese translation] Pre-production Shooting and Post-production of Video. Author: John Wiley. Film and television media have become the most popular and most influential form of media today.
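The shot-length rule above (static shots held for roughly 5-8 seconds) lends itself to a mechanical check before a cut is locked. A small sketch, assuming a shot is simply a (label, duration-in-seconds) pair; the bounds are the figures quoted above, and the function name is invented for illustration:

```python
# Sketch: flag static shots that break the hold-length rule quoted above.
# A shot is (label, duration_seconds); static shots should run ~5-8 s.
MIN_STATIC, MAX_STATIC = 5.0, 8.0

def check_static_shots(shots):
    """Return labels of static shots held too briefly or too long."""
    return [label for label, dur in shots
            if not (MIN_STATIC <= dur <= MAX_STATIC)]

cut = [("interview", 6.0), ("establishing", 2.5), ("title card", 9.0)]
print(check_static_shots(cut))  # ['establishing', 'title card']
```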

Foreign Literature Translation: Video Barrage (Danmaku) Comments


Original Text

Screen Popping (Barrage) Method and System for Video

Background
In video playback, comments can be posted in the form of a barrage: a comment drifts across the screen from one side to the other, and when a large number of comments drift across at once the visual effect resembles the bullet curtain of a shoot-'em-up game. Barrage comments are mainly caption-style messages whose text directly covers the video playback picture; their position and timing can be freely set by users on the network. Because many comment subtitles may end up superimposed on the film, the barrage can interfere with viewing the video, and the prior art has not yet produced an effective solution to this problem.

Disclosure
The present invention provides a video barrage method and system that at least solves the prior-art problem of the barrage interfering with video viewing.

To achieve the above object, according to one aspect of the invention, a video barrage method is provided. According to the invention, the method comprises: obtaining the schedule of a video program on a first screen; obtaining users' comments on the video program on the first screen; generating barrage data corresponding to the comment content; and displaying images corresponding to the barrage data on a second screen.

Further, obtaining users' comments on the video program on the first screen includes obtaining the comments together with their comment times, and displaying the barrage images on the second screen includes displaying them in synchronization with those times.

Further, the synchronized display on the second screen includes: generating a timeline from the video schedule; obtaining the points on the timeline corresponding to the comment times; and controlling the second screen to display the corresponding barrage images at those points in time.

Further, obtaining users' comments on the video program on the first screen includes obtaining comments related to the video program content from a social networking site, and the second-screen display includes displaying both the images corresponding to the barrage data and images corresponding to the social networking site's comments on the program content.

Further, obtaining program-related comments from a social networking site includes selecting comments according to a preset selection policy, and the second-screen display includes displaying the images corresponding to the selected comments.

Further, after displaying the images corresponding to the barrage data on the second screen, the method further comprises posting those barrage images to the social networking site.

Further, obtaining users' comments on the video program on the first screen includes obtaining the comments together with the identity information of the commenting users, and the second-screen display includes displaying both the barrage images and the users' identity information.

To achieve the above object, according to another aspect of the invention, a video barrage system is provided, which can be used to perform any of the video barrage methods provided by the invention. According to another aspect of the present invention, a video barrage system is provided.
The video barrage system comprises: a back-end server for generating and storing barrage data; a first screen, connected to the back-end server, for displaying images corresponding to the video data; and a second screen, connected to the back-end server, for displaying images corresponding to the barrage data.

Further, the barrage system may also comprise a social networking site server connected to the back-end server, whereby the back-end server obtains, according to a preset policy, the social networking site's comments corresponding to the video content, and posts users' comments or screenshots of the barrage images back to the social networking site through that server to spread them.

In the present invention, because the video data and the barrage data are played on different screens, the viewer can enjoy the barrage experience on one screen while watching the video normally on the other. This solves the prior-art problem of the barrage interfering with viewing, improves the viewing effect, and raises the user experience.

Brief Description of the Drawings
The drawings form a part of this application and provide a further understanding of the invention; the exemplary embodiments are used to explain the invention and do not unduly limit it. In the drawings:
FIG. 1 is a block diagram of a video barrage system according to an embodiment of the invention;
FIG. 2 is a schematic view of the interface on the second screen; and
FIG. 3 is a flow diagram of an embodiment of the video barrage method of the invention.

Detailed Description
It should be noted that, where there is no conflict, the embodiments of the present application and the features of the embodiments may be combined with each other.
An embodiment of the invention, described below with reference to the accompanying drawings, provides a video barrage system.

FIG. 1 is a block diagram of an example of a video barrage system according to an embodiment of the invention. As Figure 1 shows, the system includes a back-end server 11, a first screen 12, and a second screen 13.

The back-end server 11 generates and stores the barrage data. In general it may be a computer; it can record and store user-published content together with its timing, and associate the video content with the users' content. The video in embodiments of the invention includes forms such as online video or televised programs.

The first screen 12 is connected to the back-end server 11 and displays the images corresponding to the video data. It is used to play the video and, for viewing comfort, is generally relatively large, typically a TV or PC screen. The second screen 13 is connected to the back-end server 11 and displays the images corresponding to the barrage data. It is used to play the barrage and, for reasons such as energy saving, is generally smaller than the first screen, typically a smart terminal such as a phone or tablet whose built-in application realizes the fly-across barrage display.

In this embodiment, because the video data and the barrage data are played on different screens, the viewer can enjoy the barrage on one screen while watching the video normally on the other, thereby improving the viewing effect and raising the user experience.
To make the barrage system more entertaining and interactive, it can also include a social networking site server connected to the back-end server; through that server, and according to a preset policy, the back-end server obtains the social networking site's comments corresponding to the video content on the first screen, and posts newly generated user comments or screenshots of the barrage images back to the social networking site to spread them.

In some special circumstances, the first screen and the second screen may be integrated into one physical screen: one area of the screen serves as the first screen and another area as the second screen.

FIG. 2 is a schematic view of the interface on the second screen, which includes the following components:

The barrage background skin 201 can be set to change its color or background image; the operator can also place advertising pictures related to the broadcast video in this region as a background. 202 is a hidden toolbar: when the user touches the screen the toolbar appears, and clicking region 201 again hides it. The toolbar carries a publish button 203, a screenshot-sharing button 204, and a switching button 205.

When the publish button 203 is pressed, the user can publish barrage content as text, voice, pictures, or short video. A short video is a video file of tens of seconds automatically sliced by the server along the timeline; it can optionally be synchronized to the user's social networking site.
Pressing the screenshot-sharing button 204 makes the processor capture the barrage content currently on the second screen as an image, which the user can share to the relevant social networking sites.

The switching button 205 switches the form of the barrage, including horizontally scrolling marquees, vertically updated lists, bubbling animations, and so on.

A barrage item 206 may be a text, voice, short-video, or picture message, and may further include a user ID or avatar. If a user clicks on a barrage item, the publisher's details can be seen, enabling further social actions (reply, greeting, adding a friend, sending gifts, and so on).

The present invention further provides a video barrage method, which can be performed on the video barrage system above. FIG. 3 is a flow diagram of an embodiment of the method. As shown in Figure 3, it comprises steps S302 to S308.

Step S302: obtain the schedule of the video program on the first screen. Obtaining the schedule ensures that the video program and the user comments can be kept in step.

Step S304: obtain users' comments on the video program on the first screen. The content can be uploaded by users or downloaded from the Internet; in particular, it can be sent by users from mobile terminals, or crawled from social sites according to a certain strategy.

To keep the video program and the comments synchronized, the comment times can be obtained along with the comments; in a subsequent step these times keep the program and the displayed comments in correspondence. Specifically, the synchronization may comprise the following steps. First, generate a timeline from the schedule: since the video program is continuous, each point on the timeline can be made to correspond to its moment in the video.
Then, obtain the points on the timeline corresponding to the comment times: since the barrage data are discrete in time, the time point at which each barrage item appears can be obtained. Finally, control the second screen to display the corresponding barrage images at those points in time. For example, if a barrage item corresponds to 1 minute 30 seconds, then when the video plays to 1 minute 30 seconds the corresponding image is displayed on the second screen, synchronizing video and barrage.

To further improve the interactivity of the barrage, related content can also be taken from social networking sites: comments crawled from various social networking sites that relate to the video can themselves serve as barrage content. Specifically, in step S304, comments related to the video program content can be obtained from a social networking site, for example by judging whether a comment relates to the program from whether it comes from a particular site, region, or time, or contains a specific keyword.

Further, when obtaining program-related comments from social networking sites, a selection policy can be applied to the comment content so that the screen is not covered by too many items at once. Many selection strategies are possible, such as selecting by user level, by region, or at random.

On the other hand, the images corresponding to the barrage data may be posted to the social networking site; this interaction with social networking sites increases the fun and interactivity of the barrage. To give users better interaction with each other, this step can also obtain the identity information of the commenting user, such as the avatar and personal information.
This information can be called up for the user in a subsequent step.

Step S306: generate the barrage data corresponding to the comment content. Setting certain properties on the comment content, or adding picture effects, produces the barrage data.

Step S308: display the images corresponding to the barrage data on the second screen. If the preceding steps also obtained the identity information corresponding to a barrage item, step S308 can display that identity as well, for example directly on the barrage item, or upon a click or other viewing request. The current user can then socialize and interact with the user behind the identity information, for example by adding a friend, replying, greeting, or sending virtual gifts. If the preceding steps obtained program-related comments from a social networking site, then in step S308 the images corresponding to those comments can also be displayed on the second screen.

From the above description it can be seen that this embodiment of the invention changes the traditional overlay of barrage messages and video on the same screen, freeing the audience watching the video from overlapping interference.

It should be noted that the steps illustrated in the flowchart may be performed in a computer system as a set of computer-executable instructions, and that, although a logical order is shown, in some cases the steps may be performed in an order different from the one illustrated or described.

Obviously, those skilled in the art will understand that each module or step of the invention described above can be realized with a general-purpose computing device; the modules may be concentrated on a single computing device or distributed across a network of computing devices. Alternatively, they may be implemented as program code executable by a computing device, so that they can be stored in a storage means and executed by the computing device, or made into individual integrated-circuit modules, or multiple modules or steps made into a single integrated-circuit module. Thus the invention is not limited to any specific combination of hardware and software. The foregoing describes only preferred embodiments of the invention and does not limit it; for those skilled in the art the invention may have various modifications and changes, and any modification, equivalent replacement, or improvement made within the spirit and principles of the invention should fall within its scope.

[Chinese translation] A Barrage Method and Barrage System for Video
Background: In video, comments can be posted in the form of a barrage, which makes a comment drift across the screen from one side to the other; when a large number of comments drift past, the visual effect resembles the bullet curtain of a shoot-'em-up game. Barrage comments in video are mainly caption-style messages whose text directly covers the playing film; their position and timing can be freely set by users on the network.
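The synchronization scheme the patent describes in steps S302-S308, where comments are stored against points on the video timeline and shown when playback reaches them, can be sketched as follows. This is a toy model in Python with invented names; a real system would stream the data from the back-end server rather than hold it in memory:

```python
import bisect

# Sketch of timeline-synchronized barrage lookup: comments are kept sorted
# by timestamp (seconds into the video), and at each playback tick the
# second screen fetches the comments due in the elapsed window.
class BarrageTimeline:
    def __init__(self):
        self._times = []   # sorted comment timestamps
        self._texts = []   # comment text, parallel to _times

    def post(self, t, text):
        """Store a comment at time t, keeping the timeline sorted."""
        i = bisect.bisect(self._times, t)
        self._times.insert(i, t)
        self._texts.insert(i, text)

    def due(self, start, end):
        """Comments whose timestamps fall in (start, end] of playback."""
        lo = bisect.bisect(self._times, start)
        hi = bisect.bisect(self._times, end)
        return self._texts[lo:hi]

tl = BarrageTimeline()
tl.post(90.0, "the 1:30 comment from the example")
tl.post(12.5, "early comment")
print(tl.due(89.0, 91.0))  # ['the 1:30 comment from the example']
```

Querying a half-open window (previous tick, current tick] guarantees each comment is delivered exactly once even when playback ticks are irregular.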

Foreign Literature Translation: The Post-production and Editing of Video (2014 translation, about 3,400 characters)


Source: Alessandrini P. The Post-production and Editing of Film and Television Video [J]. Review of Film Video, 2014, 15(2): 261-307.

Original Text

The Post-production and Editing of Film and Television Video
Author: Alessandrini P

Abstract
In film and television production, post-production and editing are as important as the preliminary work. Pre-production is the planning and preparation of a film or program, while post-production, the final procedure before a work is finished, is a highly professional, comprehensive creation that integrates sounds, words, images, and other audio-visual means. Whether a film or television work turns out well depends largely on post-production and editing, and the right post-processing means must be chosen according to the requirements of the production. Starting from an understanding of film and television post-production and editing, this paper analyses video post-processing technology and how to select the correct methods.

Keywords: understanding; technology; selection

Film and television production divides into three stages: preparation, actual shooting, and post-production. Preparation is the planning and preparation of the work; actual shooting is the on-set recording process; and post-production, building on the footage already shot, uses animation and compositing techniques to make special-effects shots according to the expressive requirements of the work, then splices the groups of shots together into a complete film with rich visual and sound effects.
As a result, the late-stage work directly affects the overall quality of a film or television work.

1 Understanding film and television post-production and editing

Both audiences and producers increasingly pay attention to the scope of film and television works, so the requirements on post-production and editing have gradually risen and, through long exploration, the craft has gradually matured. In traditional post-production, the editor chooses the appropriate shots from the good material, cuts away the extra footage, and then splices the shots together to complete the work. For a typical production, editors select from a large amount of material the shots that best represent its character and theme and then organize them; the original material often runs to several or even dozens of times the length of the finished work. It should be pointed out that "editing" does not simply mean "cutting" the original material. Editing is a craft that includes splicing but places more emphasis on the creator's consciousness: not a simple "cut" and "join", but work on the visual and sound effects of the shots and their assembly. From the perspective of digital technology, the improvement in the quality and properties of film and television works depends to a large extent on its development; and looking at future prospects, post-processing mainly serves to strengthen a work's expressive power. The development of linear editing systems has gradually stalled, while nonlinear editing systems are developing well, which makes film and television production more convenient and quick.
The integrated use of various kinds of software gives film and television works fantastic visual and sound effects and brings the audience a great visual impact. At the same time, because the PC platform has developed rapidly and prices have fallen, post-production has gradually shifted from specialized professional hardware to the PC platform, and its applications are no longer confined to film and television but extend to games, the Internet, multimedia, home entertainment, and more. Post-production and editing play a very important role in the development of modern media.
2 Post-production and editing technology

With today's rapid development of digital technology, the techniques of film and television post-production and editing have matured, and the development of all kinds of software and hardware has provided particularly convenient conditions for post-production, letting audiences enjoy ever more dazzling audio-visual feasts. By purpose, the hardware and software can be roughly divided into high, middle, and low grades, each with different requirements and configurations; in general, the more high-end the product, the higher its hardware requirements and the higher the quality of the video it outputs. Choosing software and hardware of the appropriate grade according to the theme and requirements of the work naturally achieves twice the result with half the effort. In terms of processing technique, post-production divides into linear editing and nonlinear editing; in today's post-production the two complement each other, and each has its own advantages and disadvantages.

2.1 Linear editing

So-called linear editing selects and edits material after the early recording of the work is complete, then joins the new shots together. New shots are generally connected by assemble editing; to replace part of a shot, insert editing is used, but the replacement must be of the same length. The main disadvantage of this method is that it must proceed in chronological order: it is difficult to shorten or lengthen a middle section on the basis of the original material, unless everything after it is re-recorded. This is the traditional editing mode for film and television works.

2.2 Nonlinear editing

Nonlinear editing is defined relative to linear editing: if linear editing proceeds in time order, nonlinear editing achieves its edits by jumping about freely.
It is the perfect combination of computer technology and digital television technology. Almost all of its work is done inside the computer, with very little dependence on external devices; material can be chosen at will rather than in the single time order of linear editing, so selection is fast and convenient. Such systems give the producer great creative freedom, which makes them very suitable for montage editing and stream-of-consciousness ways of thinking, and the vast majority of modern film and television works are made on nonlinear editing systems.

3 Choosing appropriate editing and production methods

The respective advantages and disadvantages of linear and nonlinear editing determine their complementary relationship in production: neither can be dispensed with. Only by selecting the appropriate technology and methods according to the requirements and characteristics of the work can a good film or program be presented to the audience. In feature films, documentaries, advertising, and program title sequences, the usual mode is nonlinear editing. This choice is made because title sequences and advertisements need a very prominent theme; what impresses the audience is the content shown on screen, so the requirements for image processing are relatively high. These kinds of films often need special treatment of subtitles and color, heavy use of multilayer image superposition, motion, and transparency effects, and careful handling of fast and slow motion.
In addition, these kinds of works often use shots longer than 5 seconds; such requirements are hard to meet with linear editing but much easier to realize with nonlinear editing.

Live entertainment, interview programs, and the like generally use traditional linear editing. Live and as-live recording demand a timely response from the recording equipment: recording runs from the beginning of the show to its end, cannot stop in the middle, and allows no mistakes, so this type of program is recorded with traditional linear editing. For programs with special requirements, such as images that need playback or processing, nonlinear editing is combined with it to achieve a better visual effect.

News production mainly applies linear editing. News chiefly delivers information and current affairs to the audience and must spread a lot of information across the screen in a relatively short time, so the length of each shot is kept down: news practice requires each shot to be no longer than 5 seconds. Because news demands authenticity, almost no effects shots are used; shots are joined together directly, and at most the late-stage voice-over is mixed with location sound, so linear editing is ideal for news production. Linear editing commonly uses component connections, which reduces signal loss and guarantees program quality.

4 Artistic characteristics of post-production

After the three stages of preliminary editing, careful editing, and fine trimming, the whole editing task of a work is complete.
In the editing process, editors do not rigidly stitch shots together merely to reach a certain length, or use them stiffly; they apply effects to the shots, and in handling the material they consider three artistic aspects: action transitions, plastic form, and the treatment of time and space.

4.1 Action transitions

Action is an important means of scene transition. To make the shots conform more to natural law and be more watchable, the selection and combination of actions must be handled carefully during editing. Characters' psychological actions, language, and body movements express their personality traits and emotions and greatly push the plot forward, so editors attach great importance to handling action, which can also relieve its monotony. The most widely used device is slow motion, used to highlight an action, alongside speeding an action up.

4.2 Plastic form

Using sharpening, masks, and feature techniques to optimize the modelling in a film can render and stress the work's theme. Editors often process character modelling, picture modelling, and environmental modelling to achieve the desired effect. Environmental modelling should be in neutral shades that match the style of the work, neither too heavy nor too light: it should render the work's atmosphere without overwhelming it.
The three-dimensional configuration of a work's images is achieved through multi-angle editing of changes in picture, surface structure, spatial depth, outline, and color, enriching the work's connotation.

4.3 Time and space

Time and space form the broad background of a film or television work and are likewise the object of editing; together with the handling of action and modelling, they serve mainly to highlight the theme and render the atmosphere, so that the audience resonates with and is impressed by the work. Besides reflecting the characteristics of its time, the handling must accord with the appearance and characteristics of the corresponding space. Editors must use all kinds of editing effects correctly and reflect the different artistic features, so as to fully realize the intention of writer and director and let the audience enjoy the script's literary conception transformed into a perfect audio-visual feast.

5 Conclusion

In the creation of film and television works, post-production and editing deserve more and more attention; this is not only a demand on post-production workers but also the most basic guarantee of a good work.

[Chinese translation] The Post-production and Editing of Film and Television Video, by Alessandrini.
Abstract: In film and television production, post-production and editing are as important as the early-stage work.
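The slow motion mentioned in section 4.1 is, mechanically, a remapping of frame timestamps: frames captured at source times are scheduled for playback over a longer span. A toy sketch under the assumption that frames are identified only by their timestamps (real retiming also interpolates new frames):

```python
# Sketch: slow motion as timestamp remapping. Frames captured at source
# times are scheduled for playback at speed-scaled times.
def retime(frame_times, speed):
    """Map source frame times to playback times; speed < 1 slows motion."""
    if speed <= 0:
        raise ValueError("speed must be positive")
    return [t / speed for t in frame_times]

# Four frames shot 1/25 s apart, played at half speed: the gaps double,
# so one second of action lasts two seconds on screen.
src = [0.0, 0.04, 0.08, 0.12]
print(retime(src, 0.5))  # [0.0, 0.08, 0.16, 0.24]
```

A speed greater than 1 compresses the gaps instead, giving the fast-motion effect mentioned alongside slow motion in the text.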

Foreign Literature Translation: Animation Production


动画制作外文翻译文献(文档含中英文对照即英文原文和中文翻译)译文:动作脚本ActionScript是 Macromedia(现已被Adobe收购)为其Flash产品开发的,最初是一种简单的脚本语言,现在最新版本3.0,是一种完全的面向对象的编程语言,功能强大,类库丰富,语法类似JavaScript,多用于Flash互动性、娱乐性、实用性开发,网页制作和RIA应用程序开发。

ActionScript 是一种基于ECMAScript的脚本语言,可用于编写Adobe Flash动画和应用程序。

由于ActionScript和JavaScript都是基于ECMAScript语法的,理论上它们互相可以很流畅地从一种语言翻译到另一种。

不过JavaScript的文档对象模型(DOM)是以浏览器窗口,文档和表单为主的,ActionScript的文档对象模型(DOM)则以SWF格式动画为主,可包括动画,音频,文字和事件处理。

历史在Mac OS X 10.2操作系统上的Macromedia Flash MX专业版里,这些代码可以创建一个与MAC OS X启动过程中看见的类似的动画。

ActionScript第一次以它目前的语法出现是Flash 5版本,这也是第一个完全可对Flash编程的版本。

这个版本被命名为ActionScript1.0。

Flash 6通过增加大量的内置函数和对动画元素更好的编程控制更进一步增强了编程环境的功能。

Flash 7(MX 2004)引进了ActionScript2.0,它增加了强类型(strong typing)和面向对象特征,如显式类声明,继承,接口和严格数据类型。

ActionScript1.0和2.0使用相同的编译形式编译成Flash SWF文件(即Shockwave Flash files,或 'Small Web Format').时间表Flash Player 2:第一个支持脚本的版本,包括控制时间轴的gotoAndPlay, gotoAndStop, nextFrame和nextScene等动作。
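For illustration only, the timeline-control semantics of those Flash Player 2 actions can be mimicked in a few lines. This is a toy model in Python, not Adobe's API: a "movie" is reduced to a frame count, a playhead, and a playing flag, and the method names merely echo the ActionScript actions:

```python
# Toy model of Flash-style timeline control (illustrative only; this is
# not Adobe's API, just the action semantics described above).
class Timeline:
    def __init__(self, total_frames):
        self.total = total_frames
        self.frame = 1          # Flash frames are 1-indexed
        self.playing = False

    def goto_and_play(self, n):   # jump to frame n and resume playback
        self.frame, self.playing = n, True

    def goto_and_stop(self, n):   # jump to frame n and halt
        self.frame, self.playing = n, False

    def next_frame(self):         # advance one frame and halt
        self.frame = min(self.frame + 1, self.total)
        self.playing = False

tl = Timeline(total_frames=100)
tl.goto_and_play(10)
tl.next_frame()
print(tl.frame, tl.playing)  # 11 False
```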

视频弹幕外文文献翻译

视频弹幕外文文献翻译

Foreign Literature Translation: Video Bullet Comments ("Barrage")
Original text and translation

Original

A Barrage Method and System for Video

Background
In online video, comments can be posted as a "barrage" (danmaku): each comment drifts across the screen from one side to the other, and when a large number of comments scroll past at once, the effect resembles the bullet curtain of a shoot-'em-up game. In existing systems, the barrage consists of subtitle-style messages whose text is overlaid directly on the video playback area; users on the network may freely set the position and timing of each message. When many such comments are superimposed on the picture, they interfere with viewing, and no effective solution to this prior-art problem — the barrage display degrading the video-viewing experience — has yet been proposed.

Disclosure of the Invention
The present invention provides a video barrage method and system that at least solve the prior-art problem of the barrage display interfering with video viewing.

To achieve this object, according to one aspect of the invention, a video barrage method is provided. The method comprises: obtaining the playback schedule of a video program on a first screen; obtaining users' comments on the video program played on the first screen; generating barrage data corresponding to the comment content; and displaying images corresponding to the barrage data on a second screen.

Further, obtaining the users' comments may include obtaining both the comments and the time periods they refer to, and displaying the barrage images on the second screen may include displaying them in synchronization with those periods. The synchronized display may in turn include: generating a timeline from the video schedule; obtaining the point on the timeline corresponding to each comment period; and controlling the second screen to display the corresponding barrage image at that point in time.

Further, obtaining the users' comments may include obtaining, from a social networking site, comments related to the content of the video program; displaying the corresponding images on the second screen then includes displaying both the barrage images and images corresponding to the social-networking-site comments. The social-networking-site comments may be filtered by a preset selection policy, in which case the second screen displays only the images corresponding to the selected comments.

Further, after the barrage images have been displayed on the second screen, the method may additionally comprise posting the images corresponding to the barrage data to the social networking site.

Further, obtaining the users' comments may include obtaining, for each comment, identity information of the user who posted it; the second screen then displays the barrage images together with the users' identity information.

To achieve the same object, according to another aspect of the invention, a video barrage system is provided, which can be used to perform any of the video barrage methods provided by the present invention.
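The "preset selection policy" in the claims above is not specified further. A minimal sketch of one plausible policy — filtering candidate comments by user level and capping how many are shown per time period so the second screen is not flooded — follows; the field names (`text`, `user_level`, `period`) are assumptions for illustration, not from the patent:

```python
from collections import defaultdict

def select_comments(comments, min_level=2, max_per_period=3):
    """Apply a preset selection policy to candidate barrage comments.

    Keeps only comments from users at or above `min_level`, and at most
    `max_per_period` comments for each video period (seconds).
    """
    by_period = defaultdict(list)
    for c in comments:
        if c["user_level"] >= min_level:   # level-based filtering
            by_period[c["period"]].append(c)

    selected = []
    for period in sorted(by_period):
        # Rank the survivors and cap how many fly across the screen at once.
        ranked = sorted(by_period[period],
                        key=lambda c: c["user_level"], reverse=True)
        selected.extend(ranked[:max_per_period])
    return selected

comments = [
    {"text": "great scene", "user_level": 5, "period": 10},
    {"text": "lol",         "user_level": 1, "period": 10},
    {"text": "agreed",      "user_level": 3, "period": 10},
    {"text": "first!",      "user_level": 4, "period": 2},
]
picked = select_comments(comments)
# The level-1 comment is dropped; the rest are grouped by period.
```

Selection by region or at random, which the patent also mentions, would slot in as alternative filter predicates in place of the level check.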
The video barrage system comprises: a back-end server for generating and storing barrage data; a first screen, connected to the back-end server, for displaying the images of the video program; and a second screen, connected to the back-end server, for displaying the images corresponding to the barrage data.

Further, the barrage system may also comprise a social-networking-site server connected to the back-end server. Through this server the back-end server obtains, according to a preset policy, the comments on the social networking site that relate to the video content, and posts users' comments or screenshots of the barrage images back to the social networking site for distribution.

In the present invention, the video and the barrage data are presented on different screens, so the viewer can enjoy the barrage on one screen while watching the video undisturbed on the other. This solves the prior-art problem of the barrage interfering with video viewing, thereby improving the viewing effect and raising the user experience.

Brief Description of the Drawings
The accompanying drawings form part of this application and provide a further understanding of the invention; the exemplary embodiments and their descriptions explain the invention and do not unduly limit it. In the drawings:
Fig. 1 is a block diagram of a video barrage system according to an embodiment of the present invention;
Fig. 2 is a schematic view of the interface of the second screen; and
Fig. 3 is a flow diagram of an embodiment of the video barrage method of the present invention.

Detailed Description
It should be noted that, where there is no conflict, the embodiments of this application and the features of those embodiments may be combined with one another.
The present invention is described in detail below with reference to the accompanying drawings and its embodiments.

An embodiment of the invention provides a video barrage system, described through the following examples.

Fig. 1 is a block diagram of a video barrage system according to an embodiment of the present invention. As Fig. 1 shows, the system comprises a back-end server 11, a first screen 12 and a second screen 13.

The back-end server 11 generates and stores the barrage data. In general it may be a computer; it records and stores user-published content together with its timing, and associates the video content with the users' content. In embodiments of the invention, the video may take forms such as online network video or television programs.

The first screen 12 is connected to the back-end server 11 and displays the images of the video program. Because it is used to play the video, it is generally relatively large — typically a TV or PC screen. The second screen 13 is connected to the back-end server 11 and displays the images corresponding to the barrage data. Because it is used to play the barrage, and for reasons such as energy saving, it is generally smaller than the first screen — typically an intelligent mobile terminal such as a phone or tablet, whose built-in application renders the barrage content flying across the screen.

In this embodiment, the video and the barrage data are presented on different screens, so the viewer can enjoy the barrage on one screen while watching the video normally on the other, thereby improving the viewing effect and raising the user experience.
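The three-component architecture just described (back-end server 11, first screen 12, second screen 13) can be modelled minimally as follows. The class and field names are illustrative only — the patent does not prescribe a data model:

```python
class BackendServer:
    """Generates and stores barrage data (the role of back-end server 11)."""

    def __init__(self):
        self._barrage = []  # stored barrage entries

    def add_comment(self, text, period_s):
        # Associate a user comment with a moment (in seconds) of the video.
        entry = {"text": text, "period": period_s}
        self._barrage.append(entry)
        return entry

    def barrage_for(self, period_s):
        # What the second screen should display at this playback moment.
        return [e for e in self._barrage if e["period"] == period_s]

server = BackendServer()
server.add_comment("great scene", 90)
server.add_comment("hello", 12)
to_show = server.barrage_for(90)  # the second screen queries the server
```

The first screen would play the video and report its schedule to the server; the second screen would poll `barrage_for` (or receive pushes) as playback advances.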
To make the barrage system more entertaining and interactive, it may also include a social-networking-site server connected to the back-end server. According to a preset policy, the back-end server obtains through this interface the social-networking-site comments that correspond to the video content shown on the first screen, and posts newly generated user comments or screenshots of the barrage images to the social networking site for distribution.

In some special cases, the first screen and the second screen may be integrated into a single screen: one area of the screen serves as the first screen and the other area as the second screen.

Fig. 2 is a schematic view of the interface of the second screen. It includes the following components.

The barrage background skin 201 can be set to change its color or background image; in this region the operator may also display an advertising picture related to the video being broadcast. 202 is a toolbar that is normally hidden: when the user touches the screen the toolbar appears, and tapping region 201 again hides it. The toolbar provides a publish button 203, a screenshot-sharing button 204 and a switching button 205.

When the user presses the publish button 203, barrage content can be published as text, voice, pictures or short video. A short video is a video file of a few tens of seconds that the server automatically slices out along the timeline. Optionally, the published content is synchronized to the user's social networking site.
When the user presses the screenshot-sharing button 204, the processor automatically captures the barrage content currently shown on the second screen, generates an image from it, and lets the user share that image to their social networking sites.

The switching button 205 switches the form of the barrage, which includes horizontally scrolling marquees, vertically updated lists, bubbling animations and so on.

A barrage item 206 may be a text, voice, short-video or picture message, and may further include the user's ID or avatar. If a user taps a barrage item, the publisher's details are shown, enabling further social actions (replying, greeting, adding as a friend, sending gifts, etc.).

The present invention further provides a video barrage method, which can be performed on the video barrage system above. Fig. 3 is a flow diagram of an embodiment of this method. As shown in Fig. 3, the method comprises steps S302 to S308.

Step S302: obtain the schedule of the video program on the first screen. Obtaining the schedule ensures that the video program and the user comments can later be kept in step.

Step S304: obtain the users' comments on the first-screen video program. The content may be uploaded by users or downloaded from the Internet; in particular, it may be sent by users from mobile terminals, or crawled from social sites according to some strategy.

To keep the video program and the comments synchronized, the comment period corresponding to each comment can also be obtained in this step; the comment period ensures, in a subsequent step, that each comment is matched to the corresponding video content. Specifically, the synchronization may comprise the following steps. First, generate a timeline from the schedule: since the video program is continuous, each point on the timeline can be made to correspond to a moment of the video.
[0052] Then, obtain the time point on the timeline corresponding to each comment period: since the barrage data is discrete in time, the time point at which each barrage item appears can be obtained. Finally, control the second screen to display the image corresponding to the barrage data at that time point. For example, if a barrage item corresponds to 1 minute 30 seconds, then when video playback reaches 1 minute 30 seconds, the image corresponding to that item is displayed on the second screen, achieving synchronization between video and barrage.

To further improve the interactivity of the barrage, content can also be drawn from social networking sites: comments related to the video content are crawled from various social networking sites and used as barrage content. Specifically, in step S304, the comments related to the video program's content can be obtained from a social networking site — for example, whether a comment is related to the program can be decided by whether it comes from a particular site, a particular region or a particular time, or contains a specific keyword.

Further, when obtaining the social-networking-site comments related to the video program, the comments can also be filtered according to some selection policy, so that the second screen is not covered by too many items at once. Various policies are possible, such as selecting by user level, by region, or at random.

Conversely, the images corresponding to the barrage data may be posted to the social networking site. This interaction with social networking sites increases the fun and interactivity of the barrage. To enable better interaction between users, this step may also obtain the identity information of the user corresponding to each comment, such as the user's avatar and personal information.
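The synchronization steps just described — build a timeline from the schedule, map each comment to a time point, and show its barrage image when playback reaches that point — can be sketched as follows. The data shapes (`time` and `text` fields, a one-entry-per-second timeline) are assumptions for illustration:

```python
def build_schedule(duration_s):
    # A timeline for the continuous video program: one point per second.
    return list(range(duration_s))

def barrage_at(playhead_s, barrage_data):
    # Barrage data is discrete in time; return the items whose comment
    # period matches the current playback position (for the second screen).
    return [b["text"] for b in barrage_data if b["time"] == playhead_s]

# The patent's example: items tied to 1 minute 30 seconds (90 s).
barrage = [{"time": 90, "text": "nice shot"},
           {"time": 90, "text": "replay!"},
           {"time": 5,  "text": "intro music"}]

timeline = build_schedule(120)
shown = {t: barrage_at(t, barrage) for t in timeline if barrage_at(t, barrage)}
```

In a real system the playhead would come from the first screen's reported schedule rather than a local loop, but the time-point matching is the same.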
This information can then be used in a subsequent step.

Step S306: generate the barrage data corresponding to the comment content. Setting some display properties on the comment content, or adding picture effects to it, yields the barrage data.

Step S308: display the images corresponding to the barrage data on the second screen. If the user identity information corresponding to the barrage was also obtained in the preceding steps, step S308 can additionally display that identity information — for example directly on the barrage item, or upon a click or other viewing request. The current user can then socialize and interact with the user behind the identity information, for example by adding them as a friend, replying, greeting them, or sending virtual gifts. If comments related to the video program were obtained from a social networking site in the preceding steps, then in step S308 the second screen can display images corresponding to those comments as well as the barrage images.

From the above description it can be seen that this embodiment of the invention changes the traditional practice of overlaying barrage messages on the same screen as the video, freeing the audience watching the video from overlay interference.

It should be noted that the steps illustrated in the flow chart may be performed in a computer system as a set of computer-executable instructions, and that, although a logical order is shown in the flow chart, in some cases the steps may be performed in an order different from the one illustrated or described here.

Obviously, those skilled in the art should understand that the modules or steps of the present invention described above can be implemented on a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network of multiple computing devices. Alternatively, they may be implemented in program code executable by a computing device, so that they can be stored in a storage means and executed by a computing device; or they may be made into individual integrated-circuit modules; or multiple modules or steps among them may be made into a single integrated-circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.

[0066] The foregoing describes only preferred embodiments of the present invention and does not limit the invention; for those skilled in the art, the invention may have various modifications and changes. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the scope of the present invention.

Translation

A Barrage Method and System for Video

Background
In video, comments can be posted in the form of a barrage: a comment drifts across the screen from one side to the other, and when a large number of comments scroll past, the visual effect resembles the bullet curtain of a shoot-'em-up game. The barrage in video consists mainly of subtitle-style messages whose text is overlaid directly on the playback screen; their position and timing can be set freely by users on the network.


Document information:
Title: Efficient Video Editing for Mobile Applications
Authors: I. Vegas, A. Agrawal, T. Tian
Source: International Journal of Advanced Computer Science and Applications, 2017, 8(1): 26-30
Word count: 2,331 English words, 11,235 characters; 3,722 Chinese characters

Original text:

Efficient Video Editing for Mobile Applications

Abstract
Recording, storing and sharing video content has become one of the most popular uses of smartphones. This has created demand for video editing apps that users can use to edit their videos before sharing them on various social networks. This study describes a technique for building a video editing application that uses the processing power of both the GPU and the CPU for its editing tasks. The results and subsequent discussion show that using both the GPU and the CPU in the video editing process makes the application far more time-efficient and responsive than CPU-based processing alone.

Keywords—iOS programming; image processing; GPU; CPU; Objective-C; GPUImage; OpenGL

I. INTRODUCTION
Smartphones have become an essential part of our day-to-day life. Americans spend about one hour a day on their smartphones using mobile applications, and the iPhone is the most used device, holding 47% of the smartphone market share.

We consume many types of content on our smartphones: news, social media, images, video games, music, films, TV shows, and more. In particular, the amount of video content distributed over the Internet grows exponentially every year thanks to popular video hosting platforms such as YouTube, Facebook, Snapchat and Instagram. Mobile video consumption is expected to grow 67% year-on-year until 2019, as can be seen in Fig. 1.

Fig. 1. Evolution of mobile video consumption

Thanks to the high-quality camera in iPhones, we can record high-quality video with a device that is always in our pocket, and the videos can then be shared with our friends across different social-media platforms.
With more and more videos being recorded and shared, it has become important for users to be able to edit those videos before publishing them on the Internet. Video editing is the process of manipulating video images and adding audio and/or visual effects. Since smartphones grow more powerful every day in terms of processing and memory, it is possible to build iPhone applications that let users edit the videos they record without a computer, with a better and faster user experience.

This paper presents a study on developing a video editing application for the iOS platform. The application uses image processing algorithms and iOS programming techniques. Image processing is the processing of images using mathematical operations, applying any form of signal processing whose input is an image, a series of images, or a video, and whose output is either an image or a set of characteristics or parameters related to the image. iOS programming techniques are the set of libraries, algorithms and best practices used to build iPhone applications.

The application allows the user to record a video or import one stored in the iPhone camera roll. The user can select a specific part of the video and crop it if required, then add image filter effects along with a background song, and finally save the resulting video back to the iPhone.

II. METHODS
A. Technologies used
The application targets iOS 9.0, which runs on 80% of iOS devices, and is built with Xcode 7.3 using Objective-C as the development language. Apple recently launched a new programming language for iOS called Swift.
This application, however, is programmed in Objective-C rather than Swift, since Objective-C is a more mature language with more documentation on video processing.

B. Libraries used
For the overall application flow and user interface we use the Cocoa framework, the group of native libraries provided by Apple for building an application's user interface.

Video capture, importing/exporting and cropping are implemented with UIImagePickerController, a class created by Apple for working with media files.

The video filter processing is built on GPUImage, a third-party library created by Brad Larson. This library makes it possible to process the video on the GPU instead of the CPU — the video processing tools provided by Apple only process video on the CPU. GPUImage also lets you use predefined filters or create filters of your own.

To preview the video, the application uses Core Image, a native iOS library for playing media content within an application. AVFoundation, a native iOS library provided by Apple for manipulating audio in media files, is used to add custom background audio to the videos.

C. Views
In iOS, a view is a screen of the application. Our application has four views, discussed below.

The first view allows the user to select a video for editing: the user can either record a video with the iPhone camera or import one from the camera roll. The user can also select the parts of the video to be processed and discard the rest; the new video segment thus created is saved in a temporary directory inside the application.

Once a video has been selected for editing, the filter screen appears. This view shows a preview of the video, on which the user can select a filter to apply; there is also an option to keep the video as it is, without any filter.
When a filter is selected, the application sends the video to the GPU, so the CPU does not process it; the GPU works as a separate thread. While the video is being processed a loading icon is displayed; when processing completes, the video can be previewed with the filter applied. If the user does not like the applied filter, they can select another one and the process is repeated. The processed video remains in the temporary directory.

The third view is the audio view. It shows a classic iOS TableView listing all the songs available for the video. The song files are stored with the application, since it offers only a few songs, none longer than twenty seconds. When the user selects a song, the video is processed again; this processing uses the CPU on a parallel thread, so the application keeps running on the main thread. The user can also choose not to add any song. After the audio has been added, the video is again saved in the temporary directory.

The fourth view offers a final preview of the video with the new audio included. Here the user can save the video to the camera roll. Note that, so far, the video has only been stored in a temporary folder: this prevents unnecessary use of memory and CPU, as it is more efficient to work with a file stored in a temporary directory inside the application's space.

D. Filters
GPUImage works on top of OpenGL shaders — programs designed to run on some stage of a graphics processor (GPU). As a result, our application can process videos on the GPU, and can use predefined image filters or create custom filters using OpenGL features.

As mentioned earlier, when the application starts processing the video, the CPU creates a parallel thread.
This parallel thread is then processed by the GPU, as shown in Fig. 2. The GPU reads every frame of the video and processes each frame separately; when all frames have been processed, the GPU returns control to the CPU.

Fig. 2. GPU and CPU state while processing a video

The process OpenGL shaders use to process an image is called the rendering pipeline. The OpenGL rendering pipeline defines a number of stages for these shaders, as shown in Fig. 3.

Fig. 3. Stages of the rendering pipeline

The vertex shader transforms the video images into individual vertices. Primitive assembly connects the vertices created by the vertex shader into figures called primitive units. Rasterization is then applied, transforming the primitive units into smaller units called fragments. In the fragment-processing stage, colors and textures are applied to the fragments, and the result is saved in a frame buffer, from which an image can be created or shown on screen. The key advantage of OpenGL shaders is that these operations run in parallel on the GPU, allowing for a more responsive application.

III. RESULTS
Fig. 4 displays the first view of the application, in which the user can choose between two options: record a video with the iPhone camera or import one from the camera roll. Fig. 5 shows the view where the user can crop the video. Fig. 6 shows the view where a filter can be applied; the application currently provides 15 popular filters, listed in Table I.

Fig. 4. First app view with the two available options
Fig. 5. Second app view, for cropping the video
Fig. 6. Third app view, for applying filters
TABLE I. FILTERS AVAILABLE

Fig. 7 shows the view where the user can choose an audio song to add to the video. The application currently provides 10 songs, which are under a Creative Commons license. The last view, as shown in Fig.
8, provides a preview of the processed video and lets the user save it to the camera roll.

Fig. 7. Fourth app view, for selecting an audio song
Fig. 8. Fifth app view, for saving the edited video

IV. DISCUSSION
A. Using GPU or CPU for image processing
In the Methods section we described using GPU processing alongside CPU processing for many of the tasks. For parallel operations like video processing, the GPU has significant performance advantages over the CPU. The GPUImage framework takes only 2.5 ms on an iPhone 4 to upload a frame from the camera, apply a gamma filter, and display it, versus 106 ms for the same operation using Core Image and 460 ms using CPU-based processing — making GPUImage 40x faster than Core Image and 184x faster than CPU-based processing. On an iPhone 4S, GPUImage is 4x faster than Core Image for this case and 102x faster than CPU-based processing.

Core Image is the library Apple provides for processing images and video files. On newer devices like the iPhone 6 we can achieve similar performance with either the CPU or the GPU; for this study, however, we chose GPU processing because older devices like the iPhone 4 and iPhone 5 are more responsive when both the GPU and the CPU are used for video editing tasks.

B. Duration of the videos
Several social networks, such as Instagram and Snapchat, limit uploaded videos to 10 or 15 seconds. Users of a mobile application expect a fast, responsive, seamless experience, and processing a video longer than 20 seconds can take long enough to hurt that experience, so we limited the duration of videos taken with the application to 20 seconds. Table II shows the application's processing time for videos of different durations; all videos are 1080p at 30 frames per second.
For this experiment, the blur effect was applied on an iPhone 6.

TABLE II. 1080P VIDEO PROCESSING TIME ON AN IPHONE 6

Table III shows the processing time for videos of different durations in 640p (unlike the 1080p videos of Table II).

TABLE III. 640P VIDEO PROCESSING TIME ON AN IPHONE 6

Table IV shows the processing time for applying the blur effect on an iPhone 5s, again with 1080p videos at 30 frames per second. As the data shows, the iPhone 5s needs almost double the time of the iPhone 6 (Table II), because the iPhone 5s GPU is half as powerful as the iPhone 6 GPU.

TABLE IV. 1080P VIDEO PROCESSING TIME ON AN IPHONE 5S

C. Future Work
Future work will improve the application's scalability. For instance, the song list will be stored on a server, to which the application can connect to download songs to the smartphone. Another feature will add the option to choose among different image qualities for the output video: exporting at lower quality will be faster than generating a video with the high-quality option.

V. CONCLUSION
Mobile applications and video content have become an integral part of our lives, and it is now common to use mobile phones to record, edit and share videos on social networking sites. This paper presents a video editing application for iOS devices that can record and edit videos; the editing features include cropping, applying filters and adding background audio. The application uses the processing power of both the GPU and the CPU to improve response time, and the results show that using the GPU alongside the CPU in the video editing process makes the application more efficient and responsive.

Chinese translation: Efficient Video Editing for Mobile Applications
Abstract: Recording, storing and sharing video content has become one of the most popular uses of smartphones.
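The fragment-processing stage of the rendering pipeline described in the Methods section applies a per-pixel operation such as the gamma filter the paper benchmarks. On the GPU this math runs as an OpenGL shader over all fragments in parallel; the following is a sequential, pure-Python sketch of the same per-channel computation, for illustration only (it is not the GPUImage implementation):

```python
def gamma_filter(pixels, gamma=2.2):
    """Apply gamma correction channel-by-channel.

    `pixels` is a list of (r, g, b) tuples with floats in [0, 1].
    The fragment stage would evaluate this independently — and in
    parallel — for every fragment, which is why the GPU wins.
    """
    inv = 1.0 / gamma
    return [tuple(channel ** inv for channel in px) for px in pixels]

frame = [(0.0, 0.25, 1.0), (0.5, 0.5, 0.5)]
out = gamma_filter(frame, gamma=2.0)  # gamma 2.0 -> square root per channel
```

Because each output pixel depends only on the corresponding input pixel, the operation maps directly onto the parallel fragment stage — the source of the 184x speedup over CPU-based processing that the paper reports.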
