Total found: 271. Displayed: 136.

Publication date: 12-09-2017

Content adaptive prediction and entropy coding of motion vectors for next generation video

Number: US0009762911B2
Assignee: Intel Corporation

Techniques related to content adaptive prediction and entropy coding of motion vectors are described.

Publication date: 28-03-2017

Content adaptive entropy coding of modes and reference types data for next generation video

Number: US0009609330B2
Assignee: Intel Corporation

Techniques related to content adaptive entropy coding of modes and reference types data are described.

Publication date: 01-06-2017

EFFICIENT AND SCALABLE INTRA VIDEO/IMAGE CODING USING WAVELETS AND AVC, MODIFIED AVC, VPx, MODIFIED VPx, OR MODIFIED HEVC CODING

Number: US20170155906A1
Author: Atul Puri

Techniques related to intra video frame or image coding using wavelets and Advanced Video Coding (AVC), modified AVC, VPx, modified VPx, or modified High Efficiency Video Coding (HEVC) are discussed. Such techniques may include wavelet decomposition of a frame or image to generate subbands and coding the subbands using compliant and/or modified coding techniques.

1. A computer implemented method for image or video coding comprising: performing wavelet decomposition of an image or frame to generate a plurality of subbands; encoding each of the plurality of subbands with an Advanced Video Coding (AVC) compliant encoder to generate a plurality of AVC compliant bitstreams each corresponding to a subband of the plurality of subbands; and multiplexing the plurality of subbands to generate a scalable bitstream.
2. The method of claim 1, further comprising: selecting a wavelet analysis filter set for performing the wavelet decomposition.
3. The method of claim 1, wherein the image or frame has a bit depth of 8 bits and each of the subbands has a bit depth of 9 bits.
4. The method of claim 1, wherein the AVC compliant encoder comprises a 10 bit intra profile encoder.
5. The method of claim 1, wherein performing the wavelet decomposition comprises single level wavelet analysis filtering and the plurality of subbands comprise four subbands.
6. The method of claim 5, wherein the plurality of subbands comprise an LL subband, an LH subband, an HL subband, and an HH subband.
7. The method of claim 1, wherein performing the wavelet decomposition comprises multiple level wavelet analysis filtering.
8. The method of claim 7, wherein the plurality of subbands comprise seven subbands.
9. At least one machine readable medium comprising a plurality of instructions that, in response to being executed on a device, cause the device to perform image or video coding by: performing wavelet decomposition of an image or frame to generate a plurality of subbands; ...
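The single-level analysis of claims 5 and 6 can be sketched concretely. This is a minimal illustration assuming a Haar filter pair (the claims leave the analysis filter set selectable, per claim 2); the function name and integer normalization are assumptions.

```python
# Minimal sketch of single-level wavelet analysis, assuming a Haar filter
# pair. With this normalization an 8-bit image yields 9-bit subband
# samples, consistent with claim 3.
def haar_decompose(img):
    """Split an H x W image (even dimensions) into LL, LH, HL, HH subbands."""
    h, w = len(img), len(img[0])
    # Horizontal analysis: per-row lowpass (sum) and highpass (difference).
    lo = [[row[x] + row[x + 1] for x in range(0, w, 2)] for row in img]
    hi = [[row[x] - row[x + 1] for x in range(0, w, 2)] for row in img]

    def vertical(rows):
        # Vertical analysis with a divide-by-two to bound the bit depth.
        low = [[(rows[y][x] + rows[y + 1][x]) // 2 for x in range(w // 2)]
               for y in range(0, h, 2)]
        high = [[(rows[y][x] - rows[y + 1][x]) // 2 for x in range(w // 2)]
                for y in range(0, h, 2)]
        return low, high

    ll, lh = vertical(lo)  # low horizontal band, split vertically
    hl, hh = vertical(hi)  # high horizontal band, split vertically
    return ll, lh, hl, hh
```

Each subband would then go to its own AVC-compliant encoder, and the resulting bitstreams would be multiplexed into the single scalable stream of claim 1.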

Publication date: 25-04-2017

Method and apparatus to prioritize video information during coding and decoding

Number: US0009635376B2

A method and apparatus for prioritizing video information during coding and decoding. Video information is received and an element of the video information, such as a visual object, video object layer, video object plane, or key region, is identified. A priority is assigned to the identified element and the video information is encoded into a bitstream, such as a visual bitstream encoded using the MPEG-4 standard, including an indication of the priority of the element. The priority information can then be used when decoding the bitstream to reconstruct the video information ...

Publication date: 10-10-2017

Content adaptive parametric transforms for coding for next generation video

Number: US0009787990B2
Assignee: Intel Corporation

Techniques related to content adaptive parametric transforms for coding video are described.

Publication date: 27-09-2016

Method of content adaptive video encoding

Number: US0009456208B2

A method of content adaptive encoding video comprising segmenting video content into segments based on predefined classifications or models. Based on the segment classifications, each segment is encoded with a different encoder chosen from a plurality of encoders. Each encoder is associated with a model. The chosen encoder is particularly suited to encoding the unique subject matter of the segment. The coded bit-stream for each segment includes information regarding which encoder was used to encode that segment. A matching decoder of a plurality of decoders is chosen using the information in the coded bitstream to decode each segment using a decoder suited for the classification or model of the segment. If scenes exist which do not fall in a predefined classification, or where classification is more difficult based on the scene content, these scenes are segmented, coded and decoded using a generic coder and decoder.
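The selection-and-signaling loop described above can be pictured as a small dispatch, sketched here with invented model names, payloads, and encoder registries; only the routing pattern follows the abstract.

```python
# Toy sketch of content adaptive encoder selection: each segment is routed
# to the encoder registered for its classification, falling back to a
# generic encoder, and the model id is written alongside the coded data so
# the matching decoder can be picked later. All names are invented.
def encode_segments(segments, encoders, generic_encoder):
    """segments: iterable of (classification, payload) pairs."""
    bitstream = []
    for model, payload in segments:
        encoder = encoders.get(model, generic_encoder)
        bitstream.append((model, encoder(payload)))  # model id travels in-stream
    return bitstream

def decode_segments(bitstream, decoders, generic_decoder):
    # The in-stream model id selects the matching decoder per segment.
    return [decoders.get(model, generic_decoder)(data) for model, data in bitstream]
```

Segments whose scene does not match any predefined classification simply fall through to the generic coder and decoder, as the abstract describes.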

Publication date: 06-09-2016

Advanced watermarking system and method

Number: US0009437201B2
Assignee: Intel Corporation

A method, computer program product, and computing device for obtaining an uncompressed digital media data file. One or more default watermarks is inserted into the uncompressed digital media data file to form a watermarked uncompressed digital media data file. The watermarked uncompressed digital media data file is compressed to form a first watermarked compressed digital media data file. The first watermarked compressed media data file is stored on a storage device. The first watermarked compressed media data file is retrieved from the storage device. The first watermarked compressed digital media data file is modified to associate the first watermarked compressed digital media data file with a transaction identifier to form a second watermarked compressed digital media data file.
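The workflow reads as a two-stage pipeline: a default watermark goes in before compression, and a transaction identifier is attached to the stored compressed copy on retrieval. A toy sketch follows; the byte-prefix representations, function names, and use of zlib are all invented for illustration.

```python
# Toy sketch of the two-stage watermarking workflow. All representations
# and names are invented; only the ordering of steps follows the abstract.
import zlib

def watermark_and_compress(raw_media: bytes, default_mark: bytes) -> bytes:
    marked = default_mark + raw_media  # insert default watermark (uncompressed)
    return zlib.compress(marked)       # first watermarked compressed file

def associate_transaction(stored: bytes, transaction_id: bytes) -> bytes:
    # Modify the retrieved compressed file to carry the transaction id,
    # forming the second watermarked compressed file.
    return transaction_id + stored
```

The point of the ordering is that the default watermark survives compression inside the media, while the transaction identifier can be attached cheaply per transaction without recompressing.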

Publication date: 06-03-2018

Content adaptive motion compensated precision prediction for next generation video coding

Number: US0009912958B2
Assignee: Intel Corporation

Techniques related to adaptive precision and filtering motion compensation for video coding may include, for example, determining, via a motion compensated filtering predictor module, a motion compensation prediction precision associated with at least a portion of a current picture being decoded, where the motion compensation prediction precision comprises at least one of a quarter pel precision or an eighth pel precision. Predicted pixel data of a predicted partition associated with a prediction partition of the current picture may be generated, via the motion compensated filtering predictor module, by filtering a portion of a decoded reference picture based at least in part on the motion compensation prediction precision. Prediction partitioning indicators associated with the prediction partition and a motion vector indicating a positional difference between the prediction partition and an associated partition of the decoded reference picture may be coded, via an entropy encoder, into ...

Publication date: 05-10-2017

SYSTEMS AND METHODS FOR VIDEO/MULTIMEDIA RENDERING, COMPOSITION, AND USER-INTERACTIVITY

Number: US20170287525A1
Assignee: Intel Corporation

An interactive video/multimedia application (IVM application) may specify one or more media assets for playback. The IVM application may define the rendering, composition, and interactivity of one or more of the assets, such as video. Video multimedia application data (IVMA data) may be used to define the behavior of the IVM application. The IVMA data may be embodied as a standalone file in a text or binary, compressed format. Alternatively, the IVMA data may be embedded within other media content. A video asset used in the IVM application may include embedded, content-aware metadata that is tightly coupled to the asset. The IVM application may reference the content-aware metadata embedded within the asset to define the rendering and composition of application display elements and user-interactivity features. The interactive video/multimedia application (defined by the video and multimedia application data) may be presented to a viewer in a player application.

1. An apparatus comprising: processing logic to automatically detect an object and its location in a plurality of frames of video while the video is playing, and to associate content with the detected object using metadata, wherein the metadata associates the location of the object in the frames to content to provide for metadata-to-content synchronization for said frames, and upon rendering of the frames, said processing logic to add the content to the frames, based on the metadata, at the locations in the frames, in association with the detected object; and memory coupled to the processing logic, the memory to store the frames.
2. The apparatus of claim 1, said processing logic to automatically detect an object by detecting at least one of a shape or texture.
3. The apparatus of claim 1, wherein the metadata comprises at least one of a region of interest (ROI) bounding the detected object, and a location of the detected object in the frame.
4. The apparatus of claim 1, said processing logic to track ...

Publication date: 21-03-2017

System and method of filtering noise

Number: US0009602699B2

A system and method of removing noise in a bitstream is disclosed. Based on segment classifications of a bitstream, each segment or portion is encoded with a different encoder associated with the portion's model and chosen from a plurality of encoders. The coded bitstream for each segment includes information regarding which encoder was used to encode that segment. A circuit for removing noise in video content includes a first filter connected to a first input switch and a first output switch, the first filter being in parallel with a first pass-through line; a second filter connected to a second input switch and a second output switch, the second filter connected in parallel with a second pass-through line; and a third filter connected to a third input switch and a third output switch.

Publication date: 18-10-2016

Methods and apparatus for integrating external applications into an MPEG-4 scene

Number: US0009473770B2

A method of decoding, composing and rendering a scene. First information is obtained, the first information including a part of an MPEG-4 BIFS scene description stream and at least one coded MPEG-4 media stream. The first information is decoded by invoking a BIFS scene decoder and one or more specific media decoders that are required by the scene. Second information is obtained, the second information including a second part of a BIFS scene description stream that contains a reference to an external application. The second information is decoded by invoking the BIFS scene decoder and an external application decoder. An integrated scene is composed, the integrated scene including one or more decoded MPEG-4 media objects and one or more external application objects specified in the decoded scene description streams. The composed integrated scene is rendered on a display.

Publication date: 12-10-2017

VIDEO CODER PROVIDING IMPLICIT COEFFICIENT PREDICTION AND SCAN ADAPTATION FOR IMAGE CODING AND INTRA CODING OF VIDEO

Number: US20170295367A1

A predictive video coder performs gradient prediction based on previous blocks of image data. For a new block of image data, the prediction determines a horizontal gradient and a vertical gradient from a block diagonally above the new block (vertically above a previous horizontally adjacent block). Based on these gradients, the encoder predicts image information based on image information of either the horizontally adjacent block or a block vertically adjacent to the new block. The encoder determines a residual that is transmitted in an output bitstream. The decoder performs the identical gradient prediction and predicts image information without need for overhead information. The decoder computes the actual information based on the predicted information and the residual from the bitstream.

1. A method comprising: receiving, by a processor, a plurality of decoded blocks of image data adjacent to a current block X of image data, the plurality of decoded blocks of image data being blocks in a row above or in a column to the left of the current block X of image data; receiving, by the processor, a first parameter associated with one of a plurality of prediction modes; generating, by the processor, a second parameter associated with a direction of prediction; and decoding, by the processor, the current block X of image data predicted from one of the plurality of decoded adjacent blocks of image data according to the direction associated with the second parameter.
2. The method of claim 1, wherein the first parameter comprises a flag.
3. The method of claim 2, wherein the flag indicates that the one of the plurality of prediction modes comprises predicting the current block X from one of the decoded blocks adjacent to the current block X.
4. The method of claim 3, wherein the current block X comprises an 8 pixel by 8 pixel array.
5. The method of claim 1, wherein the plurality of decoded blocks is inside a boundary of a video object plane containing the current block X.
6. A ...
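One plausible reading of the gradient rule in the abstract can be sketched as follows, with MPEG-4-style DC prediction in mind. The representative values and the exact comparison are assumptions for illustration, not the claimed method.

```python
# Hypothetical sketch of the implicit gradient-based predictor choice.
# left, above, and diag are representative values (e.g. DC coefficients)
# of the block left of X, the block above X, and the block diagonally
# above-left of X; the comparison rule is an assumption.
def choose_predictor(left, above, diag):
    vertical_gradient = abs(diag - left)     # change moving down the left edge
    horizontal_gradient = abs(diag - above)  # change moving along the top edge
    # If the neighborhood changes little vertically, the content varies
    # mostly horizontally, so predicting from the block above is safer.
    return above if vertical_gradient < horizontal_gradient else left
```

Because the decoder sees the same reconstructed neighbors, it can repeat this choice on its own, which is why the abstract says no overhead information is needed.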

Publication date: 22-09-2016

CONTENT ADAPTIVE ENTROPY CODING OF CODED/NOT-CODED DATA FOR NEXT GENERATION VIDEO

Number: US20160277738A1
Assignee: Intel Corporation

Techniques related to content adaptive entropy coding of coded/not-coded data are described.

1.-25. (canceled)
26. A computer-implemented method for video coding, comprising: determining a selected entropy coding technique for coded/not-coded video data from a plurality of entropy coding techniques, wherein the plurality of entropy coding techniques comprise a proxy variable length coding technique and a symbol-run coding technique; entropy encoding a processed bitstream associated with the coded/not-coded video data using the selected entropy coding technique to generate encoded coded/not-coded video data; and assembling the encoded coded/not-coded video data into an output bitstream.
27. The method of claim 26, wherein the processed bitstream associated with the coded/not-coded video data comprises at least one of a pass-through of the coded/not-coded video data, a reverse of the coded/not-coded video data, a bit inversion of the coded/not-coded video data, or a bit difference of the coded/not-coded video data.
28. The method of claim 26, wherein the processed bitstream associated with the coded/not-coded video data comprises at least one of a 1-dimensional raster scan based on the coded/not-coded video data, a 1-dimensional block-based scan based on the coded/not-coded video data, or a 1-dimensional tile-based scan based on the coded/not-coded video data.
29. The method of claim 26, wherein the proxy variable length coding technique is based on a first proxy variable length coding table and wherein the plurality of entropy coding techniques comprise a second proxy variable length coding technique based on a second proxy variable length coding table.
30. The method of claim 26, wherein the coded/not-coded data is associated with a P-picture of video data, and wherein determining the selected entropy coding technique comprises: generating a 1-dimensional raster scan based on the coded/not-coded video data; ...
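The selection step in claim 26 can be pictured as a bit-cost bake-off between the candidate techniques. This is a toy model: the two cost functions are invented stand-ins for the proxy variable length coding and symbol-run techniques, not their real tables.

```python
# Toy sketch of selecting an entropy coding technique for a coded/not-coded
# bit mask: estimate the cost of each candidate and keep the cheapest.
# Both cost models are invented approximations for illustration.
def symbol_run_cost(bits):
    # Rough proxy: one token per run of equal symbols, ~4 bits per token.
    runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    return runs * 4

def proxy_vlc_cost(bits):
    # Rough proxy: group into 3-bit symbols, ~2.5 bits per coded symbol.
    return -(-len(bits) // 3) * 2.5

def select_technique(bits):
    costs = {"symbol-run": symbol_run_cost(bits),
             "proxy-vlc": proxy_vlc_cost(bits)}
    return min(costs, key=costs.get)
```

Long uniform runs favor the run-based technique; noisy, alternating masks favor the table-based one, which is the intuition behind making the choice content adaptive.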

Publication date: 24-10-2017

Content adaptive quality restoration filtering for next generation video coding

Number: US0009800899B2
Assignee: Intel Corporation

Techniques related to quality restoration filtering for video coding are described.

Publication date: 02-02-2017

METHODS AND APPARATUS FOR INTEGRATING EXTERNAL APPLICATIONS INTO AN MPEG-4 SCENE

Number: US20170034535A1

A method of decoding, composing and rendering a scene. First information is obtained, the first information including a part of an MPEG-4 BIFS scene description stream and at least one coded MPEG-4 media stream. The first information is decoded by invoking a BIFS scene decoder and one or more specific media decoders that are required by the scene. Second information is obtained, the second information including a second part of a BIFS scene description stream that contains a reference to an external application. The second information is decoded by invoking the BIFS scene decoder and an external application decoder. An integrated scene is composed, the integrated scene including one or more decoded MPEG-4 media objects and one or more external application objects specified in the decoded scene description streams. The composed integrated scene is rendered on a display.

1. A system comprising: a processor; and a computer-readable storage device storing instructions which, when executed by the processor, cause the processor to perform operations comprising: obtaining first information comprising a part of a scene description stream and a coded media stream; decoding the first information using a scene decoder and a specific application decoder associated with a scene description to yield a decoded media object; obtaining second information comprising a second part of the scene description stream that contains a reference to an external application, wherein the reference to the external application identifies a location of the external application; decoding the second information using the scene decoder and the external application to yield an external application object; composing an integrated scene comprising the decoded media object and the external application object to yield a composed integrated scene; and rendering the composed integrated scene.
2. The system of claim 1, wherein the scene description stream comprises a BIFS scene description stream. ...

Publication date: 06-03-2018

Content adaptive impairments compensation filtering for high efficiency video coding

Number: US0009912947B2
Assignee: Intel Corporation

A system and method for quality restoration filtering is described that can be used either in conjunction with video coding, or standalone for postprocessing. It uses a Wiener filtering approach in conjunction with an efficient codebook representation.

Publication date: 27-06-2017

Video coder providing implicit coefficient prediction and scan adaptation for image coding and intra coding of video

Number: US0009693051B2

A predictive video coder performs gradient prediction based on previous blocks of image data. For a new block of image data, the prediction determines a horizontal gradient and a vertical gradient from a block diagonally above the new block (vertically above a previous horizontally adjacent block). Based on these gradients, the encoder predicts image information based on image information of either the horizontally adjacent block or a block vertically adjacent to the new block. The encoder determines a residual that is transmitted in an output bitstream. The decoder performs the identical gradient prediction and predicts image information without need for overhead information. The decoder computes the actual information based on the predicted information and the residual from the bitstream.

Publication date: 23-01-2013

Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding

Number: CN102892006A

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted.
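The power-of-two embodiment amounts to transmitting only an exponent. A minimal sketch, with invented function names:

```python
# Minimal sketch of the power-of-two embodiment: instead of the display
# time difference itself, only its base-2 exponent is transmitted, which
# needs fewer bits. Names are invented for illustration.
def encode_exponent(time_diff):
    """time_diff must be a positive power of two."""
    assert time_diff > 0 and time_diff & (time_diff - 1) == 0
    return time_diff.bit_length() - 1

def decode_exponent(exponent):
    return 1 << exponent
```

A difference of 64 ticks travels as the exponent 6, so large but regular timing gaps cost only a few bits in the bitstream.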

Publication date: 12-09-2017

Content adaptive, characteristics compensated prediction for next generation video

Number: US0009762929B2

Techniques related to content adaptive, characteristics compensated prediction for video coding are described.

Publication date: 04-10-2016

Content adaptive quality restoration filtering for high efficiency video coding

Number: US0009462280B2

A system and method for impairments compensation filtering is described that can be used either in conjunction with video coding, or standalone for postprocessing. It uses a Wiener filtering approach in conjunction with an efficient codebook representation.

Publication date: 09-01-2018

Method of content adaptive video encoding

Number: US0009866845B2

A method of content adaptive encoding video comprising segmenting video content into segments based on predefined classifications or models. Based on the segment classifications, each segment is encoded with a different encoder chosen from a plurality of encoders. Each encoder is associated with a model. The chosen encoder is particularly suited to encoding the unique subject matter of the segment. The coded bit-stream for each segment includes information regarding which encoder was used to encode that segment. A matching decoder of a plurality of decoders is chosen using the information in the coded bitstream to decode each segment using a decoder suited for the classification or model of the segment. If scenes exist which do not fall in a predefined classification, or where classification is more difficult based on the scene content, these scenes are segmented, coded and decoded using a generic coder and decoder.

Publication date: 20-06-2017

Content adaptive entropy coding of partitions data for next generation video

Number: US0009686551B2
Assignee: Intel Corporation

Techniques related to content adaptive entropy coding of partitions data are described.

Publication date: 16-08-2016

Generalized scalability for video coder based on video objects

Number: US0009420309B2

A video coding system that codes video objects as scalable video object layers. Data of each video object may be segregated into one or more layers. A base layer contains sufficient information to decode a basic representation of the video object. Enhancement layers contain supplementary data regarding the video object that, if decoded, enhance the basic representation obtained from the base layer. The present invention thus provides a coding scheme suitable for use with decoders of varying processing power. A simple decoder may decode only the base layer of the video objects to obtain the basic representation. However, more powerful decoders may decode the base layer data of video objects and additional enhancement layer data to obtain improved decoded output. The coding scheme supports enhancement of both the spatial resolution and the temporal resolution of video objects.
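The base-plus-enhancement idea can be sketched with a toy model in which layers are lists of sample deltas; the representation and names are invented for illustration.

```python
# Toy model of layered scalability: every decoder reconstructs the base
# layer; a decoder with more processing power folds in as many enhancement
# layers as it can afford. The delta-list representation is invented.
def decode_video_object(base_layer, enhancement_layers, capability):
    """capability: how many enhancement layers this decoder applies."""
    picture = list(base_layer)  # basic representation from the base layer
    for layer in enhancement_layers[:capability]:
        picture = [p + d for p, d in zip(picture, layer)]  # refinement
    return picture
```

A simple decoder passes capability=0 and still produces a valid (basic) picture; a powerful one applies every layer for the enhanced output, which is the graceful-degradation property the abstract claims.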

Publication date: 24-01-2017

Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding with reduced requirements for division operations

Number: US0009554151B2
Assignee: Apple Inc.

A method and apparatus for performing motion estimation in a digital video system is disclosed. Specifically, the present invention discloses a system that quickly calculates estimated motion vectors in a very efficient manner. In one embodiment, a first multiplicand is determined by multiplying a first display time difference between a first video picture and a second video picture by a power of two scale value. This step scales up a numerator for a ratio. Next, the system determines a scaled ratio by dividing that scaled numerator by a second display time difference between said second video picture and a third video picture. The scaled ratio is then stored for calculating motion vector estimations. By storing the scaled ratio, all the estimated motion vectors can be calculated quickly with good precision, since the scaled ratio preserves significant bits and reducing the scale is performed by simple shifts.
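Numerically, the trick is to pay for one division up front and replace every later division by a multiply and a shift. A sketch under assumed names:

```python
# Sketch of the division-avoiding estimation: the ratio of display time
# differences is pre-scaled by a power of two once; each motion vector
# estimate then costs only a multiply and a shift. SHIFT and the function
# names are assumptions for illustration.
SHIFT = 8  # power-of-two scale factor

def scaled_ratio(td_first, td_second):
    """One-time cost: scale the numerator up, then do the single division."""
    return (td_first << SHIFT) // td_second

def estimate_motion_vector(mv, ratio):
    """Per-vector cost: multiply by the stored ratio, shift the scale out."""
    return (mv * ratio) >> SHIFT
```

With td_first=1 and td_second=2 the stored ratio is 128, and scaling a vector of 64 down by the half-ratio is a multiply plus an 8-bit right shift rather than a division.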

Publication date: 02-10-2013

Content adaptive impairments compensation filtering for high efficiency video coding

Number: CN103339952A

A system and method for impairments compensation filtering is described that can be used either in conjunction with video coding, or standalone for postprocessing. It uses a Wiener filtering approach in conjunction with an efficient codebook representation.

Publication date: 30-01-2018

Video codec architecture for next generation video

Number: US0009883198B2
Assignee: Intel Corporation

Techniques related to video codec architecture for next generation video are described.

Publication date: 23-02-2017

METHOD AND APPARATUS FOR VARIABLE ACCURACY INTER-PICTURE TIMING SPECIFICATION FOR DIGITAL VIDEO ENCODING

Number: US20170054994A1

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted.

1.-20. (canceled)
21. A method for decoding a plurality of video pictures, the method comprising: receiving an encoded first video picture, an encoded second video picture, and an integer value that is an exponent of a power of two value, said exponent for decoding a display time difference between the second video picture and the first video picture in a sequence of video pictures; and by a decoder, decoding the second video picture by using the display time difference to compute a motion vector for the second video picture.
22. The method of claim 21, wherein the display time difference represents a display order of the second video picture with reference to the first video picture.
23. The method of claim 21, wherein the encoded integer value, the encoded first video picture, and the encoded second video picture are stored in a bitstream.
24. The method of claim 21, wherein the display time difference is encoded in a slice header associated with the encoded second video picture.
25. The method of claim 21, wherein the display time difference is used as a display order of the second video picture relative to the first ...

Publication date: 30-03-2017

METHOD AND APPARATUS FOR VARIABLE ACCURACY INTER-PICTURE TIMING SPECIFICATION FOR DIGITAL VIDEO ENCODING WITH REDUCED REQUIREMENTS FOR DIVISION OPERATIONS

Number: US20170094308A1

A method and apparatus for performing motion estimation in a digital video system is disclosed. Specifically, the present invention discloses a system that quickly calculates estimated motion vectors in a very efficient manner. In one embodiment, a first multiplicand is determined by multiplying a first display time difference between a first video picture and a second video picture by a power of two scale value. This step scales up a numerator for a ratio. Next, the system determines a scaled ratio by dividing that scaled numerator by a second display time difference between said second video picture and a third video picture. The scaled ratio is then stored for calculating motion vector estimations. By storing the scaled ratio, all the estimated motion vectors can be calculated quickly with good precision, since the scaled ratio preserves significant bits and reducing the scale is performed by simple shifts.

1.-20. (canceled)
21. For a stream comprising first, second, and third video pictures, a method comprising: computing a scaling value that is based on (i) a particular power of two value, (ii) a first order difference value between an order value for the third video picture and an order value for the first video picture, and (iii) a second order difference value between an order value for the second video picture and the order value for the first video picture; computing a motion vector associated with the second video picture by bit-shifting a product of the scaling value and a motion vector associated with the third video picture, wherein a number of bits shifted by said bit-shifting is based on said particular power of two value; and decoding the second video picture by using the computed motion vector.
22. The method of claim 21, wherein an order value for a video picture is representative of a temporal relationship for the video picture with respect to another video picture.
23. The method of claim 21, wherein an order value for a video picture is ...

Publication date: 06-12-2016

Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding

Number: US0009516337B2
Assignee: Apple Inc.

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted.

Publication date: 13-12-2016

Content adaptive telecine and interlace reverser

Number: US0009521358B2
Assignee: Intel Corporation

Techniques related to processing a mixed content video stream to generate progressive video for encoding and/or display are discussed. Such techniques may include determining conversion techniques for various portions of the mixed content video stream and converting the portions based on the determined techniques. The conversion of true interlaced video may include content adaptive interlace reversal, and the conversion of pseudo-interlaced telecine converted video may include adaptive telecine pattern reversal.

Подробнее
29-11-2016 дата публикации

Multimedia integration description scheme, method and system for MPEG-7

Номер: US0009507779B2

The invention provides a system and method for integrating multimedia descriptions in a way that allows humans, software components or devices to easily identify, represent, manage, retrieve, and categorize the multimedia content. In this manner, a user who may be interested in locating a specific piece of multimedia content from a database, Internet, or broadcast media, for example, may search for and find the multimedia content. In this regard, the invention provides a system and method that receives multimedia content and separates the multimedia content into separate components which are assigned to multimedia categories, such as image, video, audio, synthetic and text. Within each of the multimedia categories, the multimedia content is classified and descriptions of the multimedia content are generated. The descriptions are then formatted, integrated, using a multimedia integration description scheme, and the multimedia integration description is generated for the multimedia content ...

Подробнее
01-06-2017 дата публикации

EFFICIENT INTRA VIDEO/IMAGE CODING USING WAVELETS AND VARIABLE SIZE TRANSFORM CODING

Номер: US20170155905A1
Принадлежит:

Techniques related to intra video frame or image coding using wavelets and variable size transform coding are discussed. Such techniques may include wavelet decomposition of a frame or image to generate subbands and coding partitions of the frame or image or subbands based on variable size transforms. 1. A computer-implemented method for image or video coding comprising: receiving an original image, frame, or block of a frame for intra coding; partitioning the original image, frame, or block into a plurality of transform partitions including at least a square partition and a rectangular partition; and performing an adaptive parametric transform or an adaptive hybrid parametric transform on at least a first transform partition of the plurality of transform partitions and a discrete cosine transform on at least a second transform partition of the plurality of transform partitions to produce corresponding first and second transform coefficient partitions, wherein the adaptive parametric transform or the adaptive hybrid parametric transform comprises a base matrix derived from decoded pixels neighboring the first transform partition. 2. The method of claim 1, wherein the first transform partition comprises a partition size that is within a small partition size subset of available partition sizes and the second transform partition has a partition size that is within the available partition sizes. 3. The method of claim 1, wherein the first transform partition has a size of 4×4 pixels, 8×4 pixels, 4×8 pixels, or 8×8 pixels. 4. The method of claim 1, wherein the first transform partition has a size not greater than 8×8 pixels and the second transform partition has a size not less than 8×8 pixels. 5. The method of claim 1, further comprising: quantizing the first and second transform coefficient partitions to produce quantized first and second transform coefficient partitions; and scanning and entropy encoding the quantized first and second transform ...

Подробнее
17-10-2017 дата публикации

Content adaptive partitioning for prediction and coding for next generation video

Номер: US0009794569B2

Techniques related to content adaptive partitioning for prediction and coding are described.

Подробнее
23-01-2013 дата публикации

Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding

Номер: CN102892005A
Принадлежит:

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture (105) and a nearby video picture is determined. The display time difference is then encoded (180) into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted.

Подробнее
20-03-2018 дата публикации

System and method of filtering noise

Номер: US0009924201B2

A system and method of removing noise in a bitstream is disclosed. Based on the segment classifications of a bitstream, each segment or portion is encoded with a different encoder that is associated with the segment's model and chosen from a plurality of encoders. The coded bitstream for each segment includes information regarding which encoder was used to encode that segment. A circuit for removing noise in video content includes a first filter connected to a first input switch and a first output switch, the first filter being in parallel with a first pass-through line, a second filter connected to a second input switch and a second output switch, the second filter connected in parallel with a second pass-through line, and a third filter connected to a third input switch and a third output switch.

Подробнее
05-12-2017 дата публикации

Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding

Номер: US0009838707B2
Принадлежит: APPLE INC., APPLE INC, Apple Inc.

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted.

Подробнее
10-04-2018 дата публикации

Content adaptive fusion filtering of prediction signals for next generation video coding

Номер: US0009942572B2
Принадлежит: Intel Corporation, INTEL CORP

Techniques related to fusion improvement filtering of prediction signals for video coding are described.

Подробнее
12-01-2017 дата публикации

METHOD OF CONTENT ADAPTIVE VIDEO ENCODING

Номер: US20170013264A1
Принадлежит:

A method of content adaptive encoding video comprising segmenting video content into segments based on predefined classifications or models. Based on the segment classifications, each segment is encoded with a different encoder chosen from a plurality of encoders. Each encoder is associated with a model. The chosen encoder is particularly suited to encoding the unique subject matter of the segment. The coded bit-stream for each segment includes information regarding which encoder was used to encode that segment. A matching decoder of a plurality of decoders is chosen using the information in the coded bitstream to decode each segment using a decoder suited for the classification or model of the segment. If scenes exist which do not fall in a predefined classification, or where classification is more difficult based on the scene content, these scenes are segmented, coded and decoded using a generic coder and decoder. 1. A method comprising: assigning a different predefined quantization model to each of a portion of a full frame of video content and a remainder portion of the full frame of the video content; and encoding the portion of the full frame differently than the remainder portion based on the different predefined quantization model assigned to each respective portion. 2. The method of claim 1, wherein the portion is rectangular. 3. The method of claim 2, wherein the portion is identified on a frame-by-frame basis. 4. The method of claim 1, wherein the portion is positioned in a top left corner of the full frame. 5. The method of claim 1, wherein the different predefined quantization content model causes the encoding to adaptively quantize a region of interest. 6. The method of claim 5, wherein the region of interest comprises an arbitrary shaped object in the portion of the full frame, where the portion is smaller than the full frame. 7. The method of claim 5, wherein region of interest information indicating the region of interest is found in a header ...

Подробнее
22-09-2016 дата публикации

CONTENT ADAPTIVE ENTROPY CODING OF MODES AND REFERENCE TYPES DATA FOR NEXT GENERATION VIDEO

Номер: US20160277739A1
Принадлежит: Intel Corporation

Techniques related to content adaptive entropy coding of modes and reference types data are described. 1-49. (canceled) 50. A computer-implemented method for video coding, comprising: loading splits data, horizontal/vertical data, modes data, and reference type data for at least a portion of a video frame; determining a first estimated entropy coding bit cost comprising an entropy coding bit cost for jointly coding the splits data and the modes data and an entropy coding bit cost for coding the horizontal/vertical data; determining a second estimated entropy coding bit cost comprising an entropy coding bit cost for separately coding the splits data and the modes data, and an entropy coding bit cost for coding the horizontal/vertical data; selecting between jointly and separately coding the splits data and the modes data for at least the portion of the video frame based on the lowest of the first estimated entropy coding bit cost and the second estimated entropy coding bit cost; entropy encoding, jointly or separately based on the selected coding, the splits data and the modes data, and entropy encoding the horizontal/vertical data; entropy encoding the reference type data; and outputting a bitstream comprising the entropy encoded splits data, modes data, horizontal/vertical data, and reference type data. 51. The method of claim 50, wherein the reference type data comprises inter-block reference type data and multi-block reference type data, and wherein entropy encoding the reference type data comprises: selecting a variable length coding table for the multi-block reference type data based on a number of multi-reference types in the multi-block reference type data; and encoding the multi-block reference type data based on the variable length coding table. 52. The method of claim 50, wherein the reference type data comprises inter-block reference type data and multi-block reference type data, wherein the video frame comprises a P-picture, and ...

Подробнее
02-05-2017 дата публикации

Systems and methods for playing, browsing and interacting with MPEG-4 coded audio-visual objects

Номер: US0009641897B2

A number of novel configurations for MPEG-4 playback, browsing and user interaction are disclosed. MPEG-4 playback systems are not simple extensions of MPEG-2 playback systems, but, due to object based nature of MPEG-4, present new opportunities and challenges in synchronized management of independent coded objects as well as scene composition and presentation. Therefore, these configurations allow significantly new and enhanced multimedia services and systems. In addition, MPEG-4 aims for an advanced functionality, called Adaptive Audio Visual Session (AAVS) or MPEG-J. Adaptive Audio Visual Session (AAVS) (i.e., MPEG-AAVS, MPEG-Java or MPEG-J) requires, in addition to the definition of configurations, a definition of an application programming interface (API) and its organization into Java packages. Also disclosed are concepts leading to definition of such a framework.

Подробнее
04-12-2013 дата публикации

Content adaptive motion compensation filtering for high efficiency video coding

Номер: CN103430545A
Принадлежит:

A system and method for adaptive motion filtering to improve subpel motion prediction efficiency of interframe motion compensated video coding is described. The technique uses a codebook approach that is efficient in search complexity to look up the best motion filter set from a pre-calculated codebook of motion filter coefficient sets. In some embodiments, the search complexity is further reduced by partitioning the complete codebook into a small base codebook and a larger virtual codebook, such that the main calculations for search only need to be performed on the base codebook.

Подробнее
13-02-2018 дата публикации

Content adaptive super resolution prediction generation for next generation video coding

Номер: US0009894372B2
Принадлежит: Intel Corporation, INTEL CORP, INTEL CORPORATION

Techniques related to super resolution prediction generation for video coding are described.

Подробнее
16-01-2018 дата публикации

Systems and methods for encoding multimedia content

Номер: US0009870801B2
Принадлежит: INTEL CORPORATION, INTEL CORP

An interactive video/multimedia application (IVM application) may specify one or more media assets for playback. The IVM application may define the rendering, composition, and interactivity of one or more of the assets, such as video. Interactive video multimedia application data (IVMA data) may be used to define the behavior of the IVM application. The IVMA data may be embodied as a standalone file in a text or binary, compressed format. Alternatively, the IVMA data may be embedded within other media content. A video asset used in the IVM application may include embedded, content-aware metadata that is tightly coupled to the asset. The IVM application may reference the content-aware metadata embedded within the asset to define the rendering and composition of application display elements and user-interactivity features. The interactive video/multimedia application (defined by the video and multimedia application data) may be presented to a viewer in a player application.

Подробнее
01-06-2017 дата публикации

EFFICIENT, COMPATIBLE, AND SCALABLE INTRA VIDEO/IMAGE CODING USING WAVELETS AND HEVC CODING

Номер: US20170155924A1
Принадлежит:

Techniques related to intra video frame or image coding using wavelets and High Efficiency Video Coding (HEVC) are discussed. Such techniques may include wavelet decomposition of a frame or image to generate subbands and coding the subbands using compliant and/or modified HEVC coding techniques. 1. A computer-implemented method for image or video coding comprising: performing wavelet decomposition on an original image or frame to generate a plurality of subbands; encoding each of the plurality of subbands with a High Efficiency Video Coding (HEVC) compliant encoder to generate a plurality of HEVC compliant bitstreams that are forward compatible with HEVC coding, each associated with a subband of the plurality of subbands; and multiplexing the plurality of subbands to generate a single scalable bitstream, wherein at least portions of the single scalable bitstream are HEVC compliant. 2. The method of claim 1, further comprising: selecting a wavelet analysis filter set for performing the wavelet decomposition. 3. The method of claim 1, wherein the original image or frame has a bit depth of 8 bits and each of the subbands has a bit depth of 9 bits or wherein the original image or frame has a bit depth of 9 bits and each of the subbands has a bit depth of 10 bits. 4. The method of claim 1, wherein, when the subbands have a bit depth of 9 bits, the HEVC compliant encoder comprises at least one of a 10 bit intra encoder profile or a 12 bit intra encoder profile and, when the subbands have a bit depth of 11 bits, the HEVC compliant encoder comprises a 12 bit intra encoder profile. 5. The method of claim 1, wherein performing the wavelet decomposition comprises single level wavelet analysis filtering and the plurality of subbands comprise four subbands. 6. The method of claim 5, wherein the plurality of subbands comprise an LL subband, an LH subband, an HL subband, and an HH subband. 7. The method of claim 1, wherein ...
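The four subbands named in claim 6 (LL, LH, HL, HH) come from single-level 2D wavelet analysis. A minimal integer Haar sketch illustrates this; since the method lets the encoder select the analysis filter set (claim 2), Haar is only an assumption here, and real coders use longer filters and lifting schemes.

```python
# Minimal single-level 2D Haar analysis sketch (illustrative assumption; the
# claimed method selects among wavelet analysis filter sets).

def haar_2d(img):
    """Split an even-sized grayscale image into LL, HL, LH, HH subbands.
    Dividing the 2x2 sums/differences by 2 keeps 8-bit input within a
    9-bit subband range, matching the bit-depth growth noted in claim 3."""
    h, w = len(img), len(img[0])
    LL, HL, LH, HH = ([[0] * (w // 2) for _ in range(h // 2)] for _ in range(4))
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            a, b = img[y][x], img[y][x + 1]
            c, d = img[y + 1][x], img[y + 1][x + 1]
            LL[y // 2][x // 2] = (a + b + c + d) // 2  # low-pass both directions
            HL[y // 2][x // 2] = (a - b + c - d) // 2  # horizontal detail
            LH[y // 2][x // 2] = (a + b - c - d) // 2  # vertical detail
            HH[y // 2][x // 2] = (a - b - c + d) // 2  # diagonal detail
    return LL, HL, LH, HH

# A flat 8-bit block has all its energy in LL; 255 maps to 510 (9 bits):
LL, HL, LH, HH = haar_2d([[255, 255], [255, 255]])
print(LL, HL, LH, HH)  # [[510]] [[0]] [[0]] [[0]]
```

Each subband is a quarter-resolution image, which is what lets every subband then be fed to an ordinary block-based encoder.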

Подробнее
03-11-2016 дата публикации

SYSTEM, METHOD AND COMPUTER-READABLE MEDIUM FOR ENCODING A SIGNAL INTO MACROBLOCKS

Номер: US20160323580A1
Принадлежит:

A quantizer and dequantizer for use in a video coding system that applies non linear, piece-wise linear scaling functions to video information signals based on a value of a variable quantization parameter. The quantizer and dequantizer apply different non linear, piece-wise linear scaling functions to a DC luminance signal, a DC chrominance signal and an AC chrominance signal. A code for reporting updates of the value of the quantization parameter is interpreted to require larger changes when the quantization parameter initially is large and smaller changes when the quantization parameter initially is small. 1. An encoder comprising: a processor; and a computer-readable storage medium storing instructions which, when executed by the processor, cause the processor to perform operations, the operations comprising: receiving a block of data; determining an update code representing a first adjustment to a quantization parameter for the block of data as follows: (i) determining a first 2-bit code to be the update code, when the first adjustment equals −1; (ii) determining a second 2-bit code to be the update code, when the first adjustment equals −2; (iii) determining a third 2-bit code to be the update code, when the first adjustment equals 1; or (iv) determining a fourth 2-bit code to be the update code, when the first adjustment equals 2; and sending the update code to a decoder. 2. The encoder of claim 1, wherein the update code is for the decoder to update a value of a previous quantization parameter using the first adjustment. 3. The encoder of claim 1, wherein the first 2-bit code, the second 2-bit code, the third 2-bit code and the fourth 2-bit code are in a binary format. 4. The encoder of claim 3, wherein the first 2-bit code is a binary value of 00, wherein the second 2-bit code is a binary value of 01, wherein the third 2-bit code is a binary value of 10, and wherein the fourth 2-bit code is a binary ...
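The four 2-bit update codes enumerated in claims 1 through 4 map directly onto a small lookup table. A sketch (the table name and helper function are illustrative, not from the patent):

```python
# 2-bit quantization-parameter update codes, following the mapping in the
# claims: adjustment -1 -> 00, -2 -> 01, +1 -> 10, +2 -> 11.

UPDATE_CODE = {-1: 0b00, -2: 0b01, 1: 0b10, 2: 0b11}
ADJUSTMENT = {code: adj for adj, code in UPDATE_CODE.items()}

def apply_update(previous_qp, code):
    """Decoder side: update the previous quantization parameter by the
    adjustment that the 2-bit code signals."""
    return previous_qp + ADJUSTMENT[code]

print(apply_update(16, UPDATE_CODE[2]))   # 18
print(apply_update(16, UPDATE_CODE[-2]))  # 14
```

Only two bits per update are spent because the adjustment is restricted to the four values −2, −1, +1, +2.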

Подробнее
28-02-2017 дата публикации

Content adaptive background foreground segmentation for video coding

Номер: US0009584814B2
Принадлежит: Intel Corporation, INTEL CORP

Techniques related to content adaptive background-foreground segmentation for video coding.

Подробнее
17-10-2017 дата публикации

Content adaptive entropy coding of coded/not-coded data for next generation video

Номер: US0009794568B2
Принадлежит: Intel Corporation, INTEL CORP

Techniques related to content adaptive entropy coding of coded/not-coded data are described.

Подробнее
14-11-2017 дата публикации

Content adaptive transform coding for next generation video

Номер: US0009819965B2
Принадлежит: Intel Corporation, INTEL CORP

Techniques related to applying content adaptive and fixed transforms to prediction error data partitions for coding video are discussed. Such techniques may include applying content adaptive transforms having content dependent basis functions to small to medium sized prediction error data partitions and fixed transforms having fixed basis functions to medium to large sized prediction error data partitions.

Подробнее
22-03-2012 дата публикации

SYSTEM, METHOD AND COMPUTER-READABLE MEDIUM FOR ENCODING A SIGNAL INTO MACROBLOCKS

Номер: US20120069900A1
Принадлежит: AT&T INTELLECTUAL PROPERTY II, L.P.

A quantizer and dequantizer for use in a video coding system that applies non linear, piece-wise linear scaling functions to video information signals based on a value of a variable quantization parameter. The quantizer and dequantizer apply different non linear, piece-wise linear scaling functions to a DC luminance signal, a DC chrominance signal and an AC chrominance signal. A code for reporting updates of the value of the quantization parameter is interpreted to require larger changes when the quantization parameter initially is large and smaller changes when the quantization parameter initially is small. 1. A decoder comprising: a processor; and a computer-readable storage medium storing instructions which, when processed by the processor, cause the processor to perform a method comprising: determining a quantization parameter from a bitstream; generating a luminance scalar according to a first piece-wise linear transformation of the quantization parameter, wherein: (i) the luminance scalar equals 8 whenever the quantization parameter falls within the values 1 through 4, inclusive; (ii) the luminance scalar equals 2 times the quantization parameter whenever the quantization parameter falls within the values 5 through 8, inclusive; (iii) the luminance scalar equals the quantization parameter plus 8 whenever the quantization parameter falls within the values 9 through 24, inclusive; and (iv) the luminance scalar equals 2 times the quantization parameter; inverse quantizing a DC coefficient of a respective luminance block of up to four luminance blocks by the luminance scalar to yield a respective inverse quantized DC coefficient; transforming data of the up to four luminance blocks, including the respective inverse quantized DC coefficient, according to an inverse discrete cosine transform; and merging data of the up to four luminance blocks to generate image data associated with a respective macroblock. 2. The decoder of claim 1, wherein the ...
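The piece-wise linear luminance scalar of claim 1 translates directly to code. A sketch following the claim's four branches as written, with branch (iv) applied to quantization parameters outside the ranges of branches (i) through (iii):

```python
# DC luminance scalar per the piece-wise linear mapping in the decoder claim
# above (a sketch; branch (iv) is applied as written for the remaining QPs).

def luminance_scalar(qp):
    if 1 <= qp <= 4:
        return 8            # (i) flat floor for very fine quantization
    if 5 <= qp <= 8:
        return 2 * qp       # (ii) slope 2
    if 9 <= qp <= 24:
        return qp + 8       # (iii) slope 1, continuous with (ii) at qp = 8
    return 2 * qp           # (iv) remaining quantization parameters

print([luminance_scalar(qp) for qp in (3, 6, 8, 9, 20)])  # [8, 12, 16, 17, 28]
```

The DC coefficient of each luminance block is then inverse quantized by multiplying it by this scalar before the inverse DCT.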

Подробнее
19-04-2012 дата публикации

METHOD AND APPARATUS FOR VARIABLE ACCURACY INTER-PICTURE TIMING SPECIFICATION FOR DIGITAL VIDEO ENCODING

Номер: US20120093228A1
Принадлежит:

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted. 1-20. (canceled) 21. A method for decoding a plurality of video pictures of a video sequence, the method comprising: at a decoder, receiving a bitstream comprising an encoded first video picture and an encoded second video picture, wherein the encoded first video picture comprises at least one bidirectional predicted macroblock and the encoded second video picture comprises no bidirectional predicted macroblocks and at least one unidirectional predicted macroblock that references a macroblock in the encoded first video picture; and decoding the second video picture by using the first video picture as a reference. 22. The method of claim 21, wherein decoding the second video picture comprises using a motion vector associated with the second video picture that references the first video picture. 23. The method of claim 22, wherein the motion vector for the second video picture is received from the bitstream. 24. The method of claim 22, wherein the motion vector for the second video picture is computed by the decoder. 25. The method of claim 24, wherein the motion vector for the second video picture is interpolated based on a motion vector ...

Подробнее
19-04-2012 дата публикации

METHOD AND APPARATUS FOR VARIABLE ACCURACY INTER-PICTURE TIMING SPECIFICATION FOR DIGITAL VIDEO ENCODING

Номер: US20120093229A1
Принадлежит:

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted. 1-20. (canceled) 21. A method for encoding a sequence of video pictures, said method comprising: encoding a first video picture, a second video picture, a third video picture, a first order value of the first video picture, a second order value of the second video picture, and a third order value of the third video picture, wherein each order value is representative of a position of a video picture in a sequence of video pictures; computing a particular value based on a first order difference value and a second order difference value, wherein (i) the first order difference value is representative of a difference between the third order value and the first order value and (ii) the second order difference value is representative of a difference between the second order value and the first order value; computing a motion vector of the second video picture based on the particular value and a motion vector of the third video picture; and storing the encoded first video picture, the encoded second video picture, the encoded third video picture, the encoded first order value, the encoded second order value and the encoded third order value in a
...

Подробнее
19-04-2012 дата публикации

METHOD AND APPARATUS FOR VARIABLE ACCURACY INTER-PICTURE TIMING SPECIFICATION FOR DIGITAL VIDEO ENCODING

Номер: US20120093230A1
Принадлежит:

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted. 1-20. (canceled) 21. A method for encoding a sequence of video pictures comprising first, second, and third video pictures, the method comprising: computing a particular value based on a first inter-picture time difference value between the third video picture and the first video picture and a second inter-picture time difference value between the second video picture and the first video picture; computing a motion vector for the second video picture based on the particular value and a motion vector for the third video picture; encoding the second video picture by using the computed motion vector; and storing the encoded second video picture in a bitstream. 22. The method of claim 21, wherein computing the motion vector for the second video picture comprises multiplying the particular value with the motion vector for the third video picture. 23. The method of claim 21, wherein the particular value is inversely proportional to the first inter-picture time difference value and directly proportional to the second inter-picture time difference value. 24. The method of claim 21, wherein the particular value is computed by dividing the second ...

Подробнее
19-04-2012 дата публикации

METHOD AND APPARATUS FOR VARIABLE ACCURACY INTER-PICTURE TIMING SPECIFICATION FOR DIGITAL VIDEO ENCODING

Номер: US20120093232A1
Принадлежит:

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted. 1-20. (canceled) 21. A method for encoding a plurality of video pictures, said method comprising: encoding more than one instance of a representation of an order value for a particular video picture, wherein the order value is representative of a position of the particular video picture with reference to a nearby video picture; and encoding the particular video picture by using the order value. 22. The method of claim 21, wherein the order value represents a display time difference between the particular video picture and the nearby video picture. 23. The method of claim 21, wherein said order value specifies a display order for the particular video picture in the plurality of video pictures. 24. The method of claim 21, wherein each instance of the encoded representation of the order value is compressed by using variable length coding. 25. The method of claim 21, wherein the nearby video picture is an I video picture that comprises no macroblock that references another video picture. 26. The method of claim 21, wherein the particular video picture is a P video picture that comprises at least one unidirectional predicted macroblock and no ...

Подробнее
19-04-2012 дата публикации

METHOD AND APPARATUS FOR VARIABLE ACCURACY INTER-PICTURE TIMING SPECIFICATION FOR DIGITAL VIDEO ENCODING

Номер: US20120093233A1
Принадлежит:

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted. 1-20. (canceled) 21. A non-transitory computer readable medium storing a bitstream, the bitstream comprising: a plurality of encoded video pictures, wherein a particular video picture is associated with an order value that represents a position of the particular video picture with reference to a nearby video picture; and more than one instance of an encoded representation of the order value. 22. The non-transitory computer readable medium of claim 21, wherein the order value represents a display time difference between the particular video picture and the nearby video picture. 23. The non-transitory computer readable medium of claim 21, wherein said order value specifies a display order for the particular video picture in the plurality of video pictures. 24. The non-transitory computer readable medium of claim 21, wherein the instances of the encoded representation of the order value are stored in slice headers associated with the particular video picture. 25. The non-transitory computer readable medium of claim 21, wherein the nearby video picture is an I video picture that comprises no macroblock that references another video picture.
...

26-04-2012 publication date

Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding

Number: US20120099640A1
Assignee: Individual

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted.
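The "power of two" alternate embodiment lends itself to a short sketch. The following Python is hypothetical (not taken from the patent text): only the exponent is transmitted, trading a few bits against rounding of non-power-of-two differences.

```python
# Hypothetical sketch of the "power of two" embodiment: transmit only the
# exponent e and reconstruct the display time difference as 2**e, at the
# cost of rounding differences that are not exact powers of two.

def encode_power_of_two(diff: int) -> int:
    """Exponent e such that 2**e is the nearest power of two to diff (diff >= 1)."""
    if diff < 1:
        raise ValueError("display time difference must be positive")
    e = diff.bit_length() - 1                      # floor(log2(diff))
    if diff - (1 << e) > (1 << (e + 1)) - diff:    # round up if next power is closer
        e += 1
    return e

def decode_power_of_two(e: int) -> int:
    """Reconstruct the (approximate) display time difference."""
    return 1 << e

# A difference of 8 picture intervals is carried by the small exponent 3.
assert encode_power_of_two(8) == 3
assert decode_power_of_two(encode_power_of_two(8)) == 8
```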

26-04-2012 publication date

METHOD AND APPARATUS FOR VARIABLE ACCURACY INTER-PICTURE TIMING SPECIFICATION FOR DIGITAL VIDEO ENCODING

Number: US20120099647A1
Assignee:

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted.

1-20. (canceled)
21. A method comprising: encoding a plurality of video pictures, wherein a particular encoded video picture is associated with an order value that represents a position of the particular video picture with reference to a nearby video picture; and encoding a plurality of slice headers associated with the particular video picture, each slice header in the plurality of slice headers comprising an encoded instance of the order value.
22. The method of claim 21, wherein the order value represents a display time difference between the particular video picture and the nearby video picture.
23. The method of claim 21, wherein the nearby video picture is an I video picture that comprises no macroblock that references another video picture.
24. The method of claim 21, wherein said order value specifies a display order for the particular video picture in the plurality of video pictures.
25. The method of claim 21, wherein each encoded instance of the order value is compressed by using variable length coding.
26. The method of claim 21, wherein the particular video picture is a P video picture that comprises at least one ...

26-04-2012 publication date

METHOD AND APPARATUS FOR VARIABLE ACCURACY INTER-PICTURE TIMING SPECIFICATION FOR DIGITAL VIDEO ENCODING

Number: US20120099649A1
Assignee:

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted.

1-20. (canceled)
21. A method comprising: receiving a plurality of encoded video pictures, wherein a particular encoded video picture is associated with an order value that represents a position of the particular video picture with reference to a nearby video picture; receiving a plurality of slice headers associated with the particular video picture, each slice header in the plurality of slice headers comprising an encoded instance of the order value; and decoding the particular video picture by using the order value.
22. The method of claim 21, wherein the order value represents a display time difference between the particular video picture and the nearby video picture.
23. The method of claim 21, wherein the nearby video picture is an I video picture that comprises no macroblock that references another video picture.
24. The method of claim 21, wherein said order value specifies a display order for the particular video picture in the plurality of video pictures.
25. The method of claim 21, wherein each encoded instance of the order value is compressed by using variable length coding.
26. The method of claim 21, wherein the particular video picture is a ...

21-06-2012 publication date

CONTENT ADAPTIVE MOTION COMPENSATION FILTERING FOR HIGH EFFICIENCY VIDEO CODING

Number: US20120155533A1
Assignee:

A system and method for adaptive motion filtering to improve subpel motion prediction efficiency of interframe motion compensated video coding is described. The technique uses a codebook approach that is efficient in search complexity to look up the best motion filter set from a pre-calculated codebook of motion filter coefficient sets. In some embodiments, the search complexity is further reduced by partitioning the complete codebook into a small base codebook and a larger virtual codebook, such that the main calculations for the search only need to be performed on the base codebook.

1. A video-encoder-device-implemented method for encoding an adaptive motion-compensation filter set for a plurality of subpel positions for predicting blocks in an encoded video frame, the method comprising: obtaining, by the video encoder device, a codebook comprising a multiplicity of motion-compensation filters grouped into a plurality of subpel-position groups that respectively correspond to the plurality of subpel positions, each of the plurality of subpel-position groups comprising a plurality of motion-compensation filters suitable for interpolating blocks of a picture at a corresponding one of the plurality of subpel positions; obtaining, by the video encoder device, an unencoded frame of video for encoding by the video encoder device; encoding, by the video encoder device, at least a portion of the frame of video to a bitstream, the portion of the frame of video comprising a plurality of blocks of picture content; and during encoding of at least the portion of the frame of video: selecting from the codebook a subset of the plurality of motion-compensation filters as being well-adapted for predicting the plurality of blocks of picture content, the selected subset comprising one from each of the plurality of subpel-position groups; and including in the bitstream a subpel-filter code identifying the selected subset of the plurality of motion-compensation filters within the codebook.
...
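The base-codebook search described above can be sketched as follows. This is a toy 1-D illustration with invented filters and names; real codebooks hold 2-D subpel filter coefficient sets.

```python
# Toy sketch of the base/virtual codebook idea: the expensive prediction-error
# computation runs only over the small base codebook, and the winning filter
# is signalled to the decoder by its index (the "subpel-filter code").

def filter_cost(coeffs, reference, target):
    """Sum of absolute differences between a 1-D filtered reference and target."""
    n = len(reference)
    cost = 0.0
    for i, t in enumerate(target):
        acc = sum(c * reference[min(i + k, n - 1)] for k, c in enumerate(coeffs))
        cost += abs(acc - t)
    return cost

def search_base_codebook(base_codebook, reference, target):
    """Return (index, cost) of the best filter, searching the base codebook only."""
    costs = [filter_cost(f, reference, target) for f in base_codebook]
    best = min(range(len(costs)), key=costs.__getitem__)
    return best, costs[best]

base = [
    [1.0],        # integer-pel copy filter
    [0.5, 0.5],   # half-pel bilinear average of neighbouring samples
]
ref = [10.0, 20.0, 30.0, 40.0]
target = [15.0, 25.0, 35.0, 40.0]   # samples that look half-pel shifted
idx, cost = search_base_codebook(base, ref, target)
assert idx == 1 and cost == 0.0     # the bilinear filter predicts exactly
```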

20-06-2013 publication date

METHOD AND APPARATUS TO PRIORITIZE VIDEO INFORMATION DURING CODING AND DECODING

Number: US20130156102A1
Assignee: AT&T INTELLECTUAL PROPERTY II, L.P.

A method and apparatus for prioritizing video information during coding and decoding. Video information is received and an element of the video information, such as a visual object, video object layer, video object plane or key region, is identified. A priority is assigned to the identified element and the video information is encoded into a bitstream, such as a visual bitstream encoded using the MPEG-4 standard, including an indication of the priority of the element. The priority information can then be used when decoding the bitstream to reconstruct the video information.

1. A method comprising: assigning, via a processor, to a video object layer of a video object a video object layer priority code of at least 2 bits which specifies a priority of the video object layer, the video object layer priority code taking values between 1 and 7 inclusive; and encoding the video object.
2. The method of claim 1, further comprising: transmitting an encoded video object in a bitstream, wherein the encoded video object is produced by the encoding of the video object.
3. The method of claim 2, further comprising transmitting a video object layer identifier code in the bitstream, wherein the video object layer identifier code indicates whether a priority has been specified for the video object layer.
4. The method of claim 3, wherein the video object layer identifier code comprises an is_video_object_layer_identifier flag and the video object layer priority code comprises a video_object_layer_priority code.
5. The method of claim 3, wherein causal video object planes are assigned to a first video object layer and non-causal video object planes are assigned to a second video object layer.
6. The method of claim 3, wherein intra-coded video object planes and predictive coded video object planes are assigned to a first video object layer and bidirectionally-predictive coded video object planes are assigned to a second video object layer.
7. The method of claim 1, wherein the video ...

04-07-2013 publication date

GENERALIZED SCALABILITY FOR VIDEO CODER BASED ON VIDEO OBJECTS

Number: US20130170563A1
Assignee: AT&T INTELLECTUAL PROPERTY II, L.P.

A video coding system that codes video objects as scalable video object layers. Data of each video object may be segregated into one or more layers. A base layer contains sufficient information to decode a basic representation of the video object. Enhancement layers contain supplementary data regarding the video object that, if decoded, enhance the basic representation obtained from the base layer. The present invention thus provides a coding scheme suitable for use with decoders of varying processing power. A simple decoder may decode only the base layer of the video objects to obtain the basic representation. However, more powerful decoders may decode the base layer data of video objects and additional enhancement layer data to obtain improved decoded output. The coding scheme supports enhancement of both the spatial resolution and the temporal resolution of a video object.

1. A video decoding system in which video objects are recognized from video data, wherein instances of a video object at given times are coded as video object planes (VOPs) and VOPs are assigned to one or more video object layers, the video decoding system comprising a decoder structure, the decoder structure further comprising: a base layer decoder having an input for VOP data associated with a first video object layer of the video object; a processor coupled to an output of the base layer decoder; and an enhancement layer decoder, having a first input for VOP data associated with a second video object layer of the video object and a second input coupled to the processor and responsive to predictive coded VOP (P-VOP) data including a ref_select_code included therein, the enhancement layer decoder decoding the P-VOP data with reference to one of: data of a VOP most recently decoded by the enhancement layer decoder; data of a most recent VOP in a display order decoded by the base layer decoder; data of a next VOP in a display order decoded by the base layer decoder; and data of a temporally ...

15-08-2013 publication date

Rendering Color Images and Text

Number: US20130208002A1
Assignee: ADOBE SYSTEMS INCORPORATED

Methods and apparatus, including computer program products, implement techniques for configuring at least a portion of a document for display in a display environment. The techniques include generating a document color palette for all or a portion of an electronic document, where the colors of the document color palette are selected based on colors of a plurality of color containing objects in the document or portion thereof, and generating a plurality of views of the document, two or more of the views being based on different color palettes. The plurality of views includes a document view including each of the plurality of color containing objects, where each color containing object in the document view is represented using the document color palette.

1. A method for rendering an image in a display environment, the method including: receiving an electronic document including multiple views for each of a plurality of graphics objects of the electronic document, a first view for each graphics object being based on a color palette for the graphics object and a second view for each graphics object being based on a document color palette for an associated portion of the electronic document; rendering, using one or more processors, the portion of the electronic document according to the second view of each of the plurality of graphics objects; receiving an input selecting a graphics object displayed in the rendered portion of the electronic document to be viewed separately from the rendered portion of the document; and separately rendering the selected graphics object according to the first view of the selected graphics object.
2. (canceled)
3. The method of claim 1, wherein the portion of the electronic document includes at least one text object, each text object including one or more characters of text and associated color content, the method further comprising: rendering the at least one text object using the document color palette for the portion of the ...

29-08-2013 publication date

SYSTEMS AND METHODS FOR VIDEO/MULTIMEDIA RENDERING, COMPOSITION, AND USER-INTERACTIVITY

Number: US20130227616A1
Author: Kalva Hari, Puri Atul
Assignee:

An interactive video/multimedia application (IVM application) may specify one or more media assets for playback. The IVM application may define the rendering, composition, and interactivity of one or more of the assets, such as video. Interactive video/multimedia application data (IVMA data) may be used to define the behavior of the IVM application. The IVMA data may be embodied as a standalone file in a text or binary, compressed format. Alternatively, the IVMA data may be embedded within other media content. A video asset used in the IVM application may include embedded, content-aware metadata that is tightly coupled to the asset. The IVM application may reference the content-aware metadata embedded within the asset to define the rendering and composition of application display elements and user-interactivity features. The interactive video/multimedia application (defined by the video and multimedia application data) may be presented to a viewer in a player application.

1. A method for presenting an interactive video/multimedia application, comprising: accessing data describing the interactive video/multimedia application, the data identifying an encoded video asset for playback and a multimedia element, separate from the encoded video asset, to be presented in association with a region of interest of the identified video asset; and while the encoded video asset is being decoded for playback in the interactive video/multimedia application: identifying a content-aware metadata entry embedded within the encoded video asset, detecting the region of interest within the encoded video asset using the identified content-aware metadata, and causing the multimedia element to be presented in association with the decoded video asset responsive to detecting the region of interest.
2. The method of claim 1, wherein the encoded video asset comprises a bitstream of encoded video content, and wherein the content-aware metadata entry is embedded within the bitstream of encoded ...

12-09-2013 publication date

SYSTEM, METHOD AND COMPUTER-READABLE MEDIUM FOR ENCODING A SIGNAL INTO MACROBLOCKS

Number: US20130235930A1
Assignee: AT&T INTELLECTUAL PROPERTY II, L.P.

A quantizer and dequantizer for use in a video coding system that applies non-linear, piecewise-linear scaling functions to video information signals based on the value of a variable quantization parameter. The quantizer and dequantizer apply different non-linear, piecewise-linear scaling functions to a DC luminance signal, a DC chrominance signal and an AC chrominance signal. A code for reporting updates of the value of the quantization parameter is interpreted to require larger changes when the quantization parameter initially is large and smaller changes when the quantization parameter initially is small.
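The shape of such a non-linear, piecewise-linear scaling function can be sketched. The segments below follow the luminance DC scaler of MPEG-4 Visual as recalled from the standard; treat the exact breakpoints as indicative rather than normative.

```python
# Sketch of a non-linear, piecewise-linear DC scaling function: the scale
# factor grows in linear pieces of different slopes as the quantization
# parameter qp increases (segments after the MPEG-4 Visual luma DC scaler).

def dc_scaler_luma(qp: int) -> int:
    """Piecewise-linear DC scale factor for a quantization parameter qp in 1..31."""
    if not 1 <= qp <= 31:
        raise ValueError("qp out of range")
    if qp <= 4:
        return 8            # flat segment for small qp
    if qp <= 8:
        return 2 * qp       # steeper linear piece
    if qp <= 24:
        return qp + 8       # gentler linear piece
    return 2 * qp - 16      # steepest piece for large qp

# Larger qp maps to a coarser DC quantizer step, in linear pieces.
assert [dc_scaler_luma(q) for q in (1, 8, 9, 24, 31)] == [8, 16, 17, 32, 46]
```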

14-11-2013 publication date

Method of content adaptive video encoding

Number: US20130301703A1
Assignee: AT&T Intellectual Property II LP

A method of content adaptive encoding video comprising segmenting video content into segments based on predefined classifications or models. Based on the segment classifications, each segment is encoded with a different encoder chosen from a plurality of encoders. Each encoder is associated with a model. The chosen encoder is particularly suited to encoding the unique subject matter of the segment. The coded bitstream for each segment includes information regarding which encoder was used to encode that segment. A matching decoder of a plurality of decoders is chosen using the information in the coded bitstream to decode each segment using a decoder suited for the classification or model of the segment. If scenes exist which do not fall into a predefined classification, or where classification is more difficult based on the scene content, these scenes are segmented, coded and decoded using a generic coder and decoder.

30-01-2014 publication date

SYSTEMS AND METHODS FOR VIDEO/MULTIMEDIA RENDERING, COMPOSITION, AND USER-INTERACTIVITY

Number: US20140028721A1
Author: Kalva Hari, Puri Atul
Assignee:

An interactive video/multimedia application (IVM application) may specify one or more media assets for playback. The IVM application may define the rendering, composition, and interactivity of one or more of the assets, such as video. Interactive video/multimedia application data (IVMA data) may be used to define the behavior of the IVM application. The IVMA data may be embodied as a standalone file in a text or binary, compressed format. Alternatively, the IVMA data may be embedded within other media content. A video asset used in the IVM application may include embedded, content-aware metadata that is tightly coupled to the asset. The IVM application may reference the content-aware metadata embedded within the asset to define the rendering and composition of application display elements and user-interactivity features. The interactive video/multimedia application (defined by the video and multimedia application data) may be presented to a viewer in a player application.

1. An apparatus, comprising: processing logic to automatically detect an object in at least a portion of at least one image frame of video content, and to associate metadata corresponding to the detected object with the image frame, wherein the metadata associates at least one multimedia element with the detected object, and wherein upon rendering of the image frame, the multimedia element is to be overlaid on the detected object based on the metadata; and memory coupled to the processing logic, the memory to store the image frame.
2. The apparatus of claim 1, wherein the multimedia element is to provide for user interactivity based on the metadata.
3. The apparatus of claim 2, wherein to provide for user interactivity the multimedia element is to provide a user interface element.
4. The apparatus of claim 1, wherein to automatically detect an object comprises to automatically detect at least one of a shape, an object, a texture, an edge, or text.
5. The apparatus of claim 1, ...

13-02-2014 publication date

Methods and Apparatus for Integrating External Applications into an MPEG-4 Scene

Number: US20140044196A1
Assignee: AT&T INTELLECTUAL PROPERTY II, L.P.

A method of decoding, composing and rendering a scene. First information is obtained, the first information including a part of a MPEG-4 BIFS scene description stream and at least one coded MPEG-4 media stream. The first information is decoded by invoking a BIFS scene decoder and one or more specific media decoders that are required by the scene. Second information is obtained, the second information including a second part of a BIFS scene description stream that contains a reference to an external application. The second information is decoded by invoking the BIFS scene decoder and an external application decoder. An integrated scene is composed, the integrated scene including one or more decoded MPEG-4 media objects and one or more external application objects specified in the decoded scene descriptions streams. The composed integrated scene is rendered on a display.

1. A method comprising: decoding a part of a binary format scene description stream that references a non-MPEG external application object and a pointer to a set of non-MPEG computer-executable instructions associated with the non-MPEG external application object, wherein the non-MPEG external application object is configured to control and render a windowed region within a coded scene according to the set of non-MPEG computer-executable instructions; and composing an integrated scene comprising the non-MPEG external application object.
2. The method of claim 1, further comprising decoding first information from the binary format scene description stream using a binary format scene decoder and a specific application decoder associated with a scene description.
3. The method of claim 2, further comprising decoding second information comprising the part of the binary format scene description stream using the binary format scene decoder and an external application decoder.
4. The method of claim 1, wherein the binary format scene description stream conforms to an industry standard.
5. The method of claim 4, ...

10-04-2014 publication date

VIDEO CODER PROVIDING IMPLICIT COEFFICIENT PREDICTION AND SCAN ADAPTATION FOR IMAGE CODING AND INTRA CODING OF VIDEO

Number: US20140098863A1
Assignee: AT&T INTELLECTUAL PROPERTY II, L.P.

A predictive video coder performs gradient prediction based on previous blocks of image data. For a new block of image data, the prediction determines a horizontal gradient and a vertical gradient from a block diagonally above the new block (vertically above a previous horizontally adjacent block). Based on these gradients, the encoder predicts image information based on image information of either the horizontally adjacent block or a block vertically adjacent to the new block. The encoder determines a residual that is transmitted in an output bitstream. The decoder performs the identical gradient prediction and predicts image information without need for overhead information. The decoder computes the actual information based on the predicted information and the residual from the bitstream.

1-18. (canceled)
19. A method of decoding MPEG-4 video signals, the method comprising: receiving, by a processor, a flag indicating that a prediction is to be applied to a block X comprising an 8 pixel by 8 pixel array; determining, by the processor, a direction of the prediction, wherein the direction identifies a neighboring block to the block X that is to be used in the prediction without using data from the block X, wherein the direction comprises a vertical direction or a horizontal direction relative to the block X; and generating, by the processor, a DC coefficient DC_X for the block X by using a DC coefficient of the neighboring block identified by the direction.
20. The method of claim 19, wherein the determining the direction comprises: determining a first gradient between a DC coefficient DC_A of a neighboring block A, horizontally adjacent to and left of the block X, and a DC coefficient DC_B of a neighboring block B, vertically adjacent to and above the neighboring block A; and determining a second gradient between DC_B and a DC coefficient DC_C of a neighboring block C vertically adjacent to and above the block X; ...
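The gradient rule can be sketched as follows. The decision convention below is the one commonly used for MPEG-4 intra DC prediction (predict from above when the left/above-left gradient is the smaller one); `DC_A`, `DC_B`, `DC_C` are the already-decoded neighbours to the left, above-left, and above of block X, so encoder and decoder derive the same direction without any side information.

```python
# Sketch of decoder-side gradient DC prediction: both encoder and decoder
# compute the same direction from previously decoded neighbours, so only the
# residual needs to be transmitted.

def predict_dc(dc_a, dc_b, dc_c):
    """Return (direction, predictor) for block X's DC coefficient.

    dc_a: left neighbour A, dc_b: above-left neighbour B, dc_c: above neighbour C.
    """
    if abs(dc_a - dc_b) < abs(dc_b - dc_c):
        return "vertical", dc_c      # little horizontal change: predict from above
    return "horizontal", dc_a        # otherwise predict from the left

def reconstruct_dc(residual, dc_a, dc_b, dc_c):
    """Decoder side: predicted DC plus the residual carried in the bitstream."""
    _, pred = predict_dc(dc_a, dc_b, dc_c)
    return pred + residual

# Left and above-left are close, so the gradient points upward: use DC_C.
assert predict_dc(100, 102, 130) == ("vertical", 130)
# The actual DC is recovered from the residual with no overhead direction bits.
assert reconstruct_dc(-5, 100, 102, 130) == 125
```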

05-01-2017 publication date

PROJECTED INTERPOLATION PREDICTION GENERATION FOR NEXT GENERATION VIDEO CODING

Number: US20170006284A1
Author: Gokhale Neelesh, Puri Atul
Assignee:

Techniques related to projected interpolation prediction generation for video coding are described.

1-30. (canceled)
31. A computer-implemented method for video coding, comprising: receiving two or more input frames; receiving input motion vector data associated with the input frames; generating at least part of one output projected interpolation frame, including: generating a projected motion vector field based at least in part on the received two or more input frames as well as the input motion vector data, including: scaling and translating input motion vector fields based at least in part on the input motion vector data as well as determined motion scale factor and translation factors; computing a projection location in a projected interpolation frame based at least in part on the scaled and translated input motion vector fields; and inserting at least two of the computed scaled and translated motion vectors at the computed projection location; and generating the at least part of one projected interpolation frame based at least in part on a motion compensated weighted blending of at least two of the input frames based at least in part on the projected motion vector field.
32. The method of claim 31, further comprising: wherein the receiving of the two or more input frames is via a projected interpolation reference picture subsystem from a decoded picture buffer; wherein the receiving of the input motion vector data associated with the input frames is via the projected interpolation reference picture subsystem; wherein the generating of the at least part of one output projected interpolation frame is via a projection frame generator module of the projected interpolation reference picture subsystem, including: wherein the generating of the projected motion vector field is via a motion projector module portion of the projection frame generator module; and wherein the generating of the at least part of one projected interpolation frame based at least in part on the motion ...

08-01-2015 publication date

CONTENT ADAPTIVE TRANSFORM CODING FOR NEXT GENERATION VIDEO

Number: US20150010048A1
Author: Gokhale Neelesh, Puri Atul
Assignee:

Techniques related to content adaptive transform coding are described.

1-25. (canceled)
26. A computer-implemented method for video coding, comprising: receiving a prediction error data partition for transform coding; partitioning the prediction error data partition to generate a plurality of coding partitions of the prediction error data partition; performing a content adaptive transform on a first subset of the plurality of coding partitions; and performing a fixed transform on a second subset of the plurality of coding partitions.
27. The method of claim 26, wherein partitioning the prediction error data partition to generate a plurality of coding partitions comprises partitioning the prediction error data partition using a bi-tree partitioning technique.
28. The method of claim 26, wherein the first subset of the plurality of coding partitions comprise small to medium sized partitions, and wherein the second subset of the plurality of coding partitions comprise medium to large sized partitions.
29. The method of claim 26, wherein the first subset of the plurality of coding partitions comprise small to medium sized partitions, wherein small to medium sized coding partitions comprise partitions having a height of less than or equal to 16 pixels and a width less than or equal to 16 pixels, wherein the second subset of the plurality of coding partitions comprise medium to large sized coding partitions, and wherein medium to large sized partitions comprise partitions having a height of greater than or equal to 16 pixels and a width greater than or equal to 16 pixels.
30. The method of claim 26, wherein the first subset of the plurality of coding partitions comprise small to medium sized coding partitions and, wherein the second subset of the plurality of coding partitions comprise small to large sized coding partitions, the method further comprising: determining a first coding partition from the plurality of ...
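The size thresholds in claim 29 can be sketched as a simple classifier. Note the claim lets a 16×16 partition fall in both ranges; this hypothetical sketch arbitrarily assigns it to the content-adaptive set.

```python
# Toy classifier deciding which transform family a coding partition receives,
# using the 16-pixel thresholds from the claim. The transforms themselves are
# out of scope here; only the selection rule is shown.

def choose_transform(width: int, height: int) -> str:
    """Pick a transform family from a partition's dimensions in pixels."""
    if width <= 16 and height <= 16:
        return "content_adaptive"   # small-to-medium partitions
    return "fixed"                  # medium-to-large partitions

assert choose_transform(8, 8) == "content_adaptive"
assert choose_transform(32, 16) == "fixed"
```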

08-01-2015 publication date

Content adaptive parametric transforms for coding for next generation video

Number: US20150010062A1
Assignee: Intel Corp

Techniques related to content adaptive parametric transforms for coding video are described.

12-01-2017 publication date

CONTENT ADAPTIVE DOMINANT MOTION COMPENSATED PREDICTION FOR NEXT GENERATION VIDEO CODING

Number: US20170013279A1
Author: Gokhale Neelesh, Puri Atul
Assignee: Intel Corporation

Techniques related to dominant motion compensated prediction for next generation video coding are described.

1-47. (canceled)
48. A computer-implemented method for video coding, comprising: obtaining frames of pixel data and having a current frame and a decoded reference frame to use as a motion compensation reference frame for the current frame; forming a warped global compensated reference frame by displacing at least one portion of the decoded reference frame by using global motion trajectories; determining a motion vector indicating the motion of the at least one portion and motion from a position based on the warped global compensated reference frame to a position at the current frame; and forming a prediction portion based, at least in part, on the motion vectors and corresponding to a portion on the current frame.
49. The method of wherein the at least one portion is a block of pixels used as a unit to divide the current frame and the reference frame into a plurality of the blocks.
50. The method of wherein the at least one portion is at least one tile of pixels, each tile being at least 64×64 pixels, and used as a unit to divide the current frame and the reference frame into a plurality of the tiles; the method comprising grouping tiles together based on common association with an object in the frame to form the at least one portion; forming a single motion vector for each group of tiles; and grouping the tiles based on a merge map transmittable from an encoder to a decoder.
51. The method of wherein the at least one portion is a region of pixels shaped and sized depending on an object associated with the region; and wherein a boundary of the region is at least one of: a shape that resembles the shape of the object associated with the region, and a rectangle placed around the object associated with the region.
52. The method of wherein the region is associated with at least one of: a background of the frame, a foreground of the frame, and a moving ...

15-01-2015 publication date

CONTENT ADAPTIVE PARTITIONING FOR PREDICTION AND CODING FOR NEXT GENERATION VIDEO

Number: US20150016523A1
Assignee:

Techniques related to content adaptive partitioning for prediction and coding are described.

1-27. (canceled)

28. A computer-implemented method for partitioning in video coding, comprising: receiving a video frame; segmenting the video frame into a plurality of tiles, coding units or super-fragments; determining a chosen partitioning technique for at least one tile, coding unit, or super-fragment for prediction or coding partitioning, wherein the chosen partitioning technique comprises a structured partitioning technique comprising at least one of a bi-tree partitioning technique, a k-d tree partitioning technique, a codebook representation of a bi-tree partitioning technique, or a codebook representation of a k-d tree partitioning technique; partitioning the at least one tile, coding unit, or super-fragment into a plurality of prediction partitions using the chosen partitioning technique; and coding partitioning indicators or codewords associated with the plurality of prediction partitions into a bitstream.

29. The method of claim 28, further comprising: segmenting the video frame into two or more region layers, wherein segmenting the video frame into the plurality of tiles, coding units, or super-fragments comprises segmenting the video frame into the plurality of super-fragments, and wherein the at least one super-fragment comprises an individual region layer of the two or more region layers.

30. The method of claim 28, wherein segmenting the video frame into the plurality of tiles, coding units, or super-fragments comprises segmenting the video frame into the plurality of tiles.

31. The method of claim 28, further comprising: differencing a plurality of predicted partitions associated with the plurality of prediction partitions with corresponding original pixel data to generate a corresponding plurality of prediction error data partitions; determining an individual prediction error data partition of the plurality of prediction error data partitions ...
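The bi-tree partitioning named in claim 28 recursively splits a tile in two and codes one indicator per split decision. The sketch below is a hypothetical illustration, not the patented algorithm: the variance threshold and minimum size are invented, and the split rule (cut the longer dimension in half) is a simplification; k-d tree partitioning generalizes this by also choosing the cut position.

```python
import numpy as np

def bitree_partition(block, min_size=8, var_thresh=500.0, out=None):
    # Recursively split a tile in two (cutting the longer dimension)
    # while pixel variance exceeds a threshold, emitting a 1-bit
    # split indicator per decision. Thresholds are illustrative only.
    if out is None:
        out = []
    h, w = block.shape
    if min(h, w) <= min_size or block.var() < var_thresh:
        out.append(0)                      # leaf: do not split
        return out
    out.append(1)                          # split this partition
    if h >= w:
        bitree_partition(block[:h // 2, :], min_size, var_thresh, out)
        bitree_partition(block[h // 2:, :], min_size, var_thresh, out)
    else:
        bitree_partition(block[:, :w // 2], min_size, var_thresh, out)
        bitree_partition(block[:, w // 2:], min_size, var_thresh, out)
    return out
```

The returned indicator list corresponds to the "partitioning indicators or codewords" that claim 28 codes into the bitstream.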

Publication date: 05-02-2015

Content adaptive predictive and functionally predictive pictures with modified references for next generation video coding

Number: US20150036737A1
Assignee: Individual

Techniques related to content adaptive predictive and functionally predictive pictures with modified references for next generation video coding are described.

Publication date: 12-02-2015

SYSTEMS AND METHODS FOR VIDEO/MULTIMEDIA RENDERING, COMPOSITION, AND USER-INTERACTIVITY

Number: US20150042683A1
Authors: Kalva Hari, Puri Atul
Assignee: Intel Corporation

An interactive video/multimedia application (IVM application) may specify one or more media assets for playback. The IVM application may define the rendering, composition, and interactivity of one or more of the assets, such as video. Video multimedia application data (IVMA data) may be used to define the behavior of the IVM application. The IVMA data may be embodied as a standalone file in a text or binary, compressed format. Alternatively, the IVMA data may be embedded within other media content. A video asset used in the IVM application may include embedded, content-aware metadata that is tightly coupled to the asset. The IVM application may reference the content-aware metadata embedded within the asset to define the rendering and composition of application display elements and user-interactivity features. The interactive video/multimedia application (defined by the video and multimedia application data) may be presented to a viewer in a player application.

1. An apparatus, comprising: processing logic to automatically detect an object in at least a portion of at least one image frame of video content, and to associate metadata corresponding to the detected object with the image frame, wherein the metadata associates at least one multimedia element with the detected object, and wherein upon rendering of the image frame, the multimedia element is to be overlaid on the detected object based on the metadata; and memory coupled to the processing logic, the memory to store the image frame.

2. The apparatus of claim 1, wherein the multimedia element is to provide for user interactivity based on the metadata.

3. The apparatus of claim 2, wherein to provide for user interactivity the multimedia element is to provide a user interface element.

4. The apparatus of claim 1, wherein to automatically detect an object comprises to automatically detect at least one of a shape, an object, a texture, an edge, or text.

5. The apparatus of claim 1, ...

Publication date: 07-02-2019

SCENE CHANGE DETECTION

Number: US20190042874A1
Assignee:

Methods, apparatuses and systems may provide for technology that quickly and accurately detects scene changes by evaluating a current frame based at least in part on a plurality of feature groups. Each of the feature groups may include a plurality of feature values determined from individual features. The individual features may include one or more spatial features of the current frame and one or more temporal features of the current frame as compared with previously evaluated temporal features of a previous reference frame. A determination of whether a scene change has occurred at the current frame may be made based at least in part on a majority vote among the plurality of feature groups.

1. A video processing apparatus, comprising: one or more processors; one or more memory stores communicatively coupled to the one or more processors; and a scene change detector communicatively coupled to the one or more processors, the scene change detector to: evaluate a current frame based at least in part on a plurality of feature groups, wherein each of the feature groups includes a plurality of feature values determined from individual features, wherein the individual features include one or more spatial features of the current frame and one or more temporal features of the current frame as compared with previously evaluated temporal features of a previous reference frame; and determine whether a scene change has occurred at the current frame based at least in part on a majority vote among the plurality of feature groups.

2. The apparatus of claim 1, wherein the one or more temporal features include one or more of the following features: one or more temporal differentials of the spatial features of the current frame as compared with spatial features of the previous reference frame, one or more basic temporal features of the current frame as compared to the previous reference frame, one or more temporal differentials of the temporal features of the current ...
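The majority-vote decision described above is straightforward to sketch. This is a minimal stand-in under stated assumptions: the per-group test (thresholding the summed magnitude of feature differences) and the threshold value are invented for illustration; the actual per-group decision logic is more elaborate.

```python
def group_votes_change(feature_diffs, thresh=1.0):
    # One feature group's vote: declare "changed" when the summed
    # magnitude of its feature-difference values (current frame vs.
    # previous reference frame) crosses a threshold. Hypothetical
    # test; the real groups combine spatial and temporal features.
    return sum(abs(v) for v in feature_diffs) > thresh

def scene_change(feature_groups, thresh=1.0):
    # Declare a scene change when a majority of feature groups vote
    # for it.
    votes = sum(group_votes_change(g, thresh) for g in feature_groups)
    return 2 * votes > len(feature_groups)
```

With three groups, two agreeing votes are enough to declare a scene change at the current frame.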

Publication date: 19-02-2015

METHOD AND APPARATUS FOR VARIABLE ACCURACY INTER-PICTURE TIMING SPECIFICATION FOR DIGITAL VIDEO ENCODING

Number: US20150049815A1
Assignee:

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted.

1-20. (canceled)

21. A method for encoding a sequence of video pictures, said method comprising: encoding a first order value for a first video picture, a second order value for a second video picture, and a third order value for a third video picture, wherein each order value is representative of a position of a video picture in a sequence of video pictures; computing a particular value based on a first order difference value and a second order difference value, wherein (i) the first order difference value is representative of a difference between the third order value and the first order value and (ii) the second order difference value is representative of a difference between the second order value and the first order value; computing a motion vector associated with the second video picture based on the particular value and a motion vector associated with the third video picture; encoding the first, second, and third video pictures, wherein at least one video picture is encoded by using the computed motion vector; and storing the encoded first video picture, the encoded second video picture, the encoded third video picture, the encoded
...
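The power-of-two embodiment mentioned in the abstract (send only the exponent of the display time difference) can be shown in a short sketch. Assumptions are labeled: this handles only differences that are exact powers of two; other values would take the general variable-length or arithmetic coding path described in the abstract, which is not shown here.

```python
def encode_time_diff(diff):
    # Encode a display time difference that is an exact power of two
    # as just its exponent, reducing the number of bits transmitted.
    # Non-power-of-two values would use the general variable-length
    # path (not shown).
    if diff <= 0 or diff & (diff - 1):
        raise ValueError("not a positive power of two")
    return diff.bit_length() - 1

def decode_time_diff(exponent):
    # Recover the display time difference from its exponent.
    return 1 << exponent
```

For example, a difference of 8 picture intervals is transmitted as the exponent 3.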

Publication date: 07-02-2019

GLOBAL MOTION ESTIMATION AND MODELING FOR ACCURATE GLOBAL MOTION COMPENSATION FOR EFFICIENT VIDEO PROCESSING OR CODING

Number: US20190045192A1
Authors: Puri Atul, Socek Daniel
Assignee:

Methods, apparatuses and systems may provide for technology that performs global motion estimation. More particularly, implementations relate to technology that provides accurate global motion compensation in order to improve video processing efficiency.

1. A system to perform efficient motion based video processing using global motion, comprising: obtain a plurality of block motion vectors for a plurality of blocks of a current frame with respect to a reference frame; modify the plurality of block motion vectors, wherein the modification of the plurality of block motion vectors includes one or more of the following operations: smoothing of at least a portion of the plurality of block motion vectors, merging of at least a portion of the plurality of block motion vectors, and discarding of at least a portion of the plurality of block motion vectors; restrict the modified plurality of block motion vectors by excluding a portion of the frame in some instances; compute a plurality of candidate global motion models based on the restricted-modified plurality of block motion vectors for the current frame with respect to the reference frame, wherein each candidate global motion model comprises a set of candidate global motion model parameters representing global motion of the current frame; determine a best global motion model from the plurality of candidate global motion models on a frame-by-frame basis, wherein each best global motion model comprises a set of best global motion model parameters representing global motion of the current frame; modify a precision of the best global motion model parameters in response to one or more application parameters; map the modified-precision best global motion model parameters to a pixel-based coordinate system to determine a plurality of mapped global motion warping vectors for a plurality of reference frame control-grid points; predict and encode the plurality of mapped global motion warping vectors for the
...
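Fitting one candidate global motion model to the block motion vectors can be sketched as a least-squares problem. This is a hedged illustration, not the patented pipeline: it fits a single 6-parameter affine candidate and skips the smoothing/merging/discarding and restriction steps; the function name is hypothetical.

```python
import numpy as np

def fit_affine_gmm(positions, mvs):
    # Fit one candidate 6-parameter affine global motion model
    #   dx = a0 + a1*x + a2*y,  dy = b0 + b1*x + b2*y
    # to block motion vectors (dx, dy) at block positions (x, y) by
    # least squares. The described system builds several candidate
    # models and keeps the best on a frame-by-frame basis.
    x, y = positions[:, 0], positions[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])
    a, *_ = np.linalg.lstsq(A, mvs[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, mvs[:, 1], rcond=None)
    return np.concatenate([a, b])  # [a0, a1, a2, b0, b1, b2]
```

The fitted parameters would then be precision-adjusted and mapped to warping vectors at the control-grid points, as the claim describes.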

Publication date: 07-02-2019

REGION-BASED MOTION ESTIMATION AND MODELING FOR ACCURATE REGION-BASED MOTION COMPENSATION FOR EFFICIENT VIDEO PROCESSING OR CODING

Number: US20190045193A1
Authors: Puri Atul, Socek Daniel
Assignee:

Methods, apparatuses and systems may provide for technology that performs region-based motion estimation. More particularly, implementations relate to technology that provides accurate region-based motion compensation in order to improve video processing efficiency and/or video coding efficiency.

1. A system to perform efficient motion based video processing using region-based motion, comprising: a region-based motion analyzer, the region-based motion analyzer including one or more substrates and logic coupled to the one or more substrates, wherein the logic is to: obtain a plurality of block motion vectors for a plurality of blocks of a current frame with respect to a reference frame; modify the plurality of block motion vectors, wherein the modification of the plurality of block motion vectors includes one or more of the following operations: smoothing of at least a portion of the plurality of block motion vectors, merging of at least a portion of the plurality of block motion vectors, and discarding of at least a portion of the plurality of block motion vectors; and segment the current frame into a plurality of regions, wherein the regions comprise a background region-type including a background moving region, and comprise a foreground region-type including a single foreground moving region in some instances and a plurality of foreground moving regions in other instances; and a power supply to provide power to the region-based motion analyzer.

2. The system of claim 1, wherein the logic is further to: restrict the modified plurality of block motion vectors by excluding a portion of the frame in some instances; and prior to the segmentation of the current frame into a plurality of regions, compute a plurality of candidate region-based motion models individually for the background region-type and the foreground region-type based on the restricted-modified plurality of block motion vectors for the current frame with respect to the reference frame, wherein each ...

Publication date: 07-02-2019

Reduced Partitioning and Mode Decisions Based on Content Analysis and Learning

Number: US20190045195A1
Authors: Gokhale Neelesh, Puri Atul
Assignee:

Methods, apparatuses and systems may provide for technology that quickly and accurately determines a limited number of partition maps and a limited number of mode subsets. A partition and mode simplification system may include a content analyzer based partitions and mode subset generator system, which itself may include a content analyzer and features generator as well as a partitions and mode subset generator. The content analyzer and features generator may determine a plurality of spatial features and temporal features for a current largest coding unit of a current frame of the video sequence. The partitions and mode subset generator may determine a limited number of partition maps and a limited number of mode subsets for the current largest coding unit of the current frame based at least in part on the spatial features and temporal features.

1. A system to perform efficient video coding, comprising: a partition and mode simplification analyzer, the partition and mode simplification analyzer including a substrate and logic coupled to the substrate, wherein the logic is to: determine a plurality of spatial features and temporal features for a current largest coding unit of a current frame of the video sequence; determine a limited number of partition maps and a limited number of mode subsets for the current largest coding unit of the current frame based at least in part on the spatial features and temporal features; and perform rate distortion optimization operations during coding of the video sequence, wherein the rate distortion optimization operations have a limited complexity based at least in part on the limited number of partition maps and the limited number of mode subsets.

2. The system of claim 1, wherein the limited number of partition maps are selected to be two partition maps and the limited number of mode subsets are selected to be two modes per partition.

3. The system of claim 1, wherein the limited number of partition maps include a primary ...

Publication date: 07-02-2019

AUTOMATIC ADAPTIVE LONG TERM REFERENCE FRAME SELECTION FOR VIDEO PROCESS AND VIDEO CODING

Number: US20190045217A1
Assignee:

Methods, apparatuses and systems may provide for technology that provides adaptive Long Term Reference (LTR) frame techniques for video processing and/or coding. More particularly, implementations described herein may utilize fast content analysis based Adaptive Long Term Reference (LTR) methods and systems that can reliably decide when to turn LTR on/off, select LTR frames, and/or assign LTR frame quality for higher efficiency and higher quality encoding with practical video encoders.

1. A system to apply an adaptive Long Term Reference to a video sequence, comprising: one or more substrates and logic coupled to the one or more substrates, wherein the logic is to: receive content analysis of stability of the video sequence; receive coding condition of the video sequence; and automatically toggle Long Term Reference operations between an on setting mode and an off setting mode based at least in part on the received content analysis and coding condition information, wherein no frames of the video sequence are assigned as Long Term Reference frames and any previously assigned Long Term Reference frames are unmarked when in the off setting mode; and a power supply to provide power to the logic.

2. The system of claim 1, wherein the logic is further to: determine a spatial complexity, a temporal complexity, and a ratio of temporal complexity to spatial complexity for each frame of the video sequence; and generate content analysis of the stability of the video sequence based on the spatial complexity, the temporal complexity, and the ratio of temporal complexity to spatial complexity.

3. The system of claim 1, wherein the logic is further to automatically toggle Long Term Reference operations between the on setting mode and the off setting mode in an AVC encoder.

4. The system of claim 1, wherein the logic is further to automatically toggle Long Term Reference operations between the on setting mode and the off setting mode in an HEVC encoder.

5. The system of claim ...

Publication date: 26-02-2015

SYNTHETIC AUDIOVISUAL DESCRIPTION SCHEME, METHOD AND SYSTEM FOR MPEG-7

Number: US20150058361A1
Assignee:

A method and system for description of synthetic audiovisual content makes it easier for humans, software components or devices to identify, manage, categorize, search, browse and retrieve such content. For instance, a user may wish to search for specific synthetic audiovisual objects in digital libraries, Internet web sites or broadcast media; such a search is enabled by the invention. Key characteristics of synthetic audiovisual content itself such as the underlying 2d or 3d models and parameters for animation of these models are used to describe it. To represent features of synthetic audiovisual content, depending on the description scheme to be used, a number of descriptors are selected and assigned values. The description scheme instantiated with descriptor values is used to generate the description, which is then stored for actual use during query/search.

1. A method comprising: receiving audiovisual data; representing a synthetic feature of the audiovisual data having a defined animation parameter as a descriptor, wherein the descriptor is selected according to a synthetic audiovisual description scheme that specifies a structure and semantics of relationships between components of the audiovisual data; assigning, via a processor, a value to the descriptor based on the synthetic feature; and generating a description based on the value of the descriptor, wherein the description comprises spatial and temporal relationships of audiovisual objects in a scene.

2. The method of claim 1, wherein the components comprise other constituent description schemes.

3. The method of claim 2, wherein the other constituent description schemes comprise an animation event description scheme corresponding to dynamic characteristics of the audiovisual data.

4. The method of claim 2, wherein the constituent description schemes comprise an animation object description scheme corresponding to static characteristics of the audiovisual data in the scene.

5. The method of claim 2, wherein ...

Publication date: 05-03-2015

ADVANCED WATERMARKING SYSTEM AND METHOD

Number: US20150067345A1
Assignee:

A method, computer program product, and computing device for obtaining an uncompressed digital media data file. One or more default watermarks is inserted into the uncompressed digital media data file to form a watermarked uncompressed digital media data file. The watermarked uncompressed digital media data file is compressed to form a first watermarked compressed digital media data file. The first watermarked compressed media data file is stored on a storage device. The first watermarked compressed media data file is retrieved from the storage device. The first watermarked compressed digital media data file is modified to associate the first watermarked compressed digital media data file with a transaction identifier to form a second watermarked compressed digital media data file.

1. A method comprising: inserting one or more default watermarks into an uncompressed digital media data file to form a watermarked uncompressed digital media data file; compressing the watermarked uncompressed digital media data file to form a watermarked compressed digital media data file; and modifying the watermarked compressed digital media data file to associate the watermarked compressed digital media data file with an identifier indicative, at least in part, of an association of the watermarked compressed digital media data file with a specific transaction or content provider, to form a second watermarked compressed digital media data file.

2. The method of claim 1, wherein the identifier comprises a transaction identifier.

3. The method of claim 1, wherein the identifier comprises an apparatus identifier.

4. The method of claim 1, wherein the identifier comprises an indication of the content provider.

5. One or more non-transitory computer readable media having a plurality of instructions stored thereon which, when executed by one or more processors, cause the one or more processors to: compressing, using H.264 encoding, the digital media content to form an ...

Publication date: 10-03-2016

METHOD AND APPARATUS FOR VARIABLE ACCURACY INTER-PICTURE TIMING SPECIFICATION FOR DIGITAL VIDEO ENCODING

Number: US20160073128A1
Assignee:

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture and a nearby video picture is determined. The display time difference is then encoded into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted.

1-20. (canceled)

21. A method for decoding a plurality of video pictures, the method comprising: receiving an encoded first video picture, an encoded second video picture and an integer value that is an exponent of a power of two value, said exponent for decoding a display time difference between the second video picture and the first video picture in a sequence of video pictures; and by a decoder, decoding the second video picture by using the display time difference to compute a motion vector for the second video picture.

22. The method of claim 21, wherein the display time difference represents a display order of the second video picture with reference to the first video picture.

23. The method of claim 21, wherein the encoded integer value, the encoded first video picture, and the encoded second video picture are stored in a bitstream.

24. The method of claim 21, wherein the display time difference is encoded in a slice header associated with the encoded second video picture.

25. The method of claim 21, wherein the display time difference is used as a display order of the second video picture relative to the first ...

Publication date: 19-03-2015

Content adaptive motion compensation filtering for high efficiency video coding

Number: US20150078448A1
Assignee: Intel Corp

A system and method for adaptive motion filtering to improve subpel motion prediction efficiency of interframe motion compensated video coding is described. The technique uses a codebook approach that is efficient in search complexity to look-up best motion filter set from a pre-calculated codebook of motion filter coefficient set. In some embodiments, the search complexity is further reduced by partitioning the complete codebook into a small base codebook and a larger virtual codebook, such that the main calculations for search only need to be performed on the base codebook.
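The codebook look-up described above can be sketched as an exhaustive search over candidate filter-coefficient sets. This is a simplified illustration under stated assumptions, not the patented design: the filters here are 1-D horizontal kernels (real motion filter sets are 2-D subpel interpolation filters), the codebook entries are invented, and the function name is hypothetical.

```python
import numpy as np

def best_filter(codebook, ref_block, target_block):
    # Exhaustive search of a (base) codebook of filter-coefficient
    # sets: apply each candidate filter to the reference block and
    # keep the index that minimizes SAD against the target block.
    best_idx, best_sad = -1, float("inf")
    for idx, taps in enumerate(codebook):
        filtered = np.apply_along_axis(
            lambda r: np.convolve(r, taps, mode="same"), 1, ref_block)
        sad = np.abs(filtered - target_block).sum()
        if sad < best_sad:
            best_idx, best_sad = idx, sad
    return best_idx, best_sad
```

The base/virtual partitioning in the text means this exhaustive loop runs only over the small base codebook; entries of the larger virtual codebook are derived from the base-codebook winner rather than searched directly.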

Publication date: 16-03-2017

CONTENT ADAPTIVE IMPAIRMENTS COMPENSATION FILTERING FOR HIGH EFFICIENCY VIDEO CODING

Number: US20170078659A1
Authors: Puri Atul, Socek Daniel
Assignee:

A system and method for quality restoration filtering is described that can be used either in conjunction with video coding, or standalone for postprocessing. It uses a Wiener filtering approach in conjunction with an efficient codebook representation.

1-48. (canceled)

49. A system comprising: memory to store at least a portion of a codebook corresponding to a plurality of sets of loop filter coefficients for a loop filter; an interface to receive encoded video data including a syntax to enable selection of a set of loop filter coefficients based on the codebook; and a processor to decode the encoded video data, wherein to decode the encoded video data, the processor is to: decode at least a portion of the encoded video data into at least a block of quantized coefficients; generate a residual block based on application of an inverse quantization and an inverse transform to the block of quantized coefficients; add, to the residual block, a loop filtering compensated prediction block from a motion compensation predictor to form a decoded video block; select, based on the codebook, the set of loop filter coefficients based on the syntax; process at least a portion of a plurality of decoded blocks including the decoded block based on implementation of the selected loop filter coefficients by the loop filter; and generate decoded video data based on the plurality of decoded blocks.

50. The system of claim 49, wherein the encoded video data further comprises an indicator to indicate the codebook is to be used for loop filtering at least the portion of the decoded blocks.

51. The system of claim 49, wherein the encoded video data further comprises an indicator to indicate the codebook is not to be used for loop filtering at least a second portion of the encoded video data.

52. The system of claim 49, wherein the system comprises one of a computer, a set-top box, a phone, a handheld device, or a gaming console.

53. The ...

Publication date: 12-06-2014

METHOD AND APPARATUS TO PRIORITIZE VIDEO INFORMATION DURING CODING AND DECODING

Number: US20140161183A1
Assignee: AT&T INTELLECTUAL PROPERTY II, L.P.

A method and apparatus for prioritizing video information during coding and decoding. Video information is received and an element of the video information, such as a visual object, video object layer, video object plane or key region, is identified. A priority is assigned to the identified element and the video information is encoded into a bitstream, such as a visual bitstream encoded using the MPEG-4 standard, including an indication of the priority of the element. The priority information can then be used when decoding the bitstream to reconstruct the video information.

1. A method comprising: identifying, via a processor, a priority of a video object layer in a plurality of video object layers associated with a video object; and assigning to the video object layer a video object layer priority code comprising three bits which specifies the priority, the video object layer priority code taking values between one and seven inclusive.

2. The method of claim 1, further comprising: transmitting an encoded video object in a bitstream, wherein the encoded video object is produced by encoding of the video object.

3. The method of claim 2, further comprising transmitting a video object layer identifier code in the bitstream, wherein the video object layer identifier code indicates whether the priority has been specified for the video object layer.

4. The method of claim 3, wherein the video object layer identifier code comprises an is_video_object_layer_identifier flag and the video object layer priority code comprises a video_object_layer_priority code.

5. The method of claim 3, wherein causal video object planes of the video object are assigned to a first video object layer and non-causal video object planes of the video object are assigned to a second video object layer.

6. The method of claim 3, wherein intra-coded video object planes and predictive coded video object planes are assigned to a first video object layer of the video object and bidirectionally-predictive ...

Publication date: 31-03-2016

CONTENT ADAPTIVE TELECINE AND INTERLACE REVERSER

Number: US20160094803A1
Assignee:

Techniques related to processing a mixed content video stream to generate progressive video for encoding and/or display are discussed. Such techniques may include determining conversion techniques for various portions of the mixed content video stream and converting the portions based on the determined techniques. The conversion of true interlaced video may include content adaptive interlace reversal and the conversion of pseudo-interlaced telecine converted video may include adaptive telecine pattern reversal.

1. A computer-implemented method for processing video for encoding and/or display comprising: determining a frame format for a frame of a mixed content video stream comprising one or more video formats; determining a frame group format for a frame group of the mixed content video stream, wherein the frame group comprises the frame; determining a conversion technique for the frame group based at least in part on the frame group format; and converting the frame group to a final progressive format based on the determined conversion technique.

2. The method of claim 1, wherein the video formats of the mixed content video stream comprise at least one of a 60 frames per second progressive format, a 30 frames per second progressive format, a 30 frames per second true interlaced format, or a 30 frames per second pseudo-interlaced telecine converted format.

3. The method of claim 1, wherein determining the frame format comprises content analysis of the frame, and wherein the frame format comprises at least one of progressive or interlaced.

4. The method of claim 1, wherein determining the frame format comprises: determining a plurality of descriptors associated with content of the frame; evaluating a plurality of comparison tests based on the plurality of descriptors; and determining the frame format based on the comparison tests, wherein the frame format comprises at least one of progressive or interlaced.

5. The method of claim 1, wherein ...

Publication date: 07-04-2016

MULTIMEDIA INTEGRATION DESCRIPTION SCHEME, METHOD AND SYSTEM FOR MPEG-7

Number: US20160098401A1

The invention provides a system and method for integrating multimedia descriptions in a way that allows humans, software components or devices to easily identify, represent, manage, retrieve, and categorize the multimedia content. In this manner, a user who may be interested in locating a specific piece of multimedia content from a database, Internet, or broadcast media, for example, may search for and find the multimedia content. In this regard, the invention provides a system and method that receives multimedia content and separates the multimedia content into separate components which are assigned to multimedia categories, such as image, video, audio, synthetic and text. Within each of the multimedia categories, the multimedia content is classified and descriptions of the multimedia content are generated. The descriptions are then formatted, integrated, using a multimedia integration description scheme, and the multimedia integration description is generated for the multimedia content. The multimedia description is then stored into a database. As a result, a user may query a search engine which then retrieves the multimedia content from the database whose integration description matches the query criteria specified by the user. The search engine can then provide the user a useful search result based on the multimedia integration description.

1. A method comprising: generating, for an identified multimedia type of multimedia content, multimedia object descriptions from multimedia objects in the multimedia content; generating an integration description scheme which creates, when implemented, relationships between the multimedia object descriptions and non-hierarchical entity relation graph descriptions, wherein the non-hierarchical entity relation graph descriptions relate to communication between the multimedia objects in the multimedia content; generating a description record to represent a portion of the multimedia content by integrating, according to the ...

Publication date: 26-03-2020

HERBICIDAL MIXTURE, COMPOSITION AND METHOD

Номер: US20200095202A1
Автор: Puri Atul
Принадлежит:

Disclosed is a mixture comprising (a) a compound of Formula I and salts thereof wherein A, A, A, R, B, B and B are defined in the disclosure, and (b) 2-pyridinecarboxylic acid, 4-amino-3-chloro-6-(4-chloro-2-fluoro-3-methoxyphenyl)-5-fluoro-, phenylmethyl ester (i.e. florpyrauxifen-benzyl). Also disclosed is a composition comprising the mixture. Also disclosed is a method of applying the mixture to undesired vegetation comprising contacting the undesired vegetation or its environment with an effective amount of the mixture of the invention. 2. The mixture of claim 1 wherein R is methyl, ethyl or propyl. 3. The mixture of wherein R is methyl. 4. The mixture of wherein the weight ratio of (a) to (b) is from about 1:20 to about 56:1. 5. The mixture of wherein the mixture controls the growth of weeds from the genus selected from the group consisting of Cyperus, Echinochloa, Heteranthera, Leptochloa and Monochoria. 6. The mixture of wherein the mixture controls the growth of weeds from the genus Cyperus. 7. The mixture of wherein the species is difformis. 8. The mixture of wherein the weeds are growing in Oryza sativa. 9. The mixture of further comprising (c) at least one additional active ingredient. 10. The mixture of and at least one component selected from the group consisting of surfactants, solid diluents and liquid diluents. 11. A method for controlling the growth of undesired vegetation comprising contacting the vegetation or its environment with a herbicidally effective amount of a mixture of . This invention relates to a mixture of certain substituted pyrrolidinone compounds and salts thereof, with florpyrauxifen-benzyl, compositions containing them, and methods of their use for controlling undesirable vegetation. The control of undesired vegetation is extremely important in achieving high crop efficiency. 
Achievement of selective control of the growth of weeds especially in such useful crops as rice, soybean, sugar beet, maize, potato, wheat, barley, tomato and plantation ...

26-06-2014 publication date

ADVANCED MULTI-CHANNEL WATERMARKING SYSTEM AND METHOD

Number: US20140181991A1
Assignee:

A method, computer program product, and computing device for modifying a first channel portion of a digital media data file to include at least a first primary watermark. A second channel portion of the digital media data file is modified to include at least a first secondary watermark, wherein the first secondary watermark is the complement of the first primary watermark. 1. A method comprising: modifying a first channel portion of a digital media data file to include at least a primary watermark; and modifying a second channel portion of the digital media data file to include at least a secondary watermark, wherein the first channel portion and the second channel portion are different portions of the same channel and the secondary watermark is a complement of the primary watermark. 2. The method of wherein the primary watermark comprises one or more of: a transaction identifier, an asset identifier, a synchronization word, a speed change word, a space, a content provider identifier, and a distributor identifier. 3. The method of wherein the first channel portion of the digital media data file comprises: a left audio channel. 4. The method of wherein the second channel portion of the digital media data file comprises: a right audio channel. 5. The method of wherein the second channel portion of the digital media data file comprises: a left audio channel. 6. The method of wherein the first channel portion of the digital media data file comprises: a right audio channel. 7. The method of wherein the digital media data file is selected from the group consisting of: an audio file and a digital audio portion of a digital audio-visual file. 8. The method of wherein the digital media data file includes at least a third channel portion. 9. 
A computer program product residing on a non-transitory computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations comprising: modifying a first ...
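The complement relationship between the primary and secondary watermarks described above can be sketched at the bit level (a hypothetical illustration; the 5-bit payload and the list representation are assumptions, not taken from the patent):

```python
# Hypothetical bit-level illustration of the complementary watermark
# pair described above: the secondary mark embedded in the second
# channel portion is the bitwise complement of the primary mark.

def complement(bits):
    """Secondary watermark: complement of the primary's bits."""
    return [1 - b for b in bits]

primary = [1, 0, 1, 1, 0]        # assumed primary watermark payload
secondary = complement(primary)
print(secondary)                  # [0, 1, 0, 0, 1]

# Each aligned bit pair sums to 1, a property a detector could use
# to check that both marks survived intact.
print(all(p + s == 1 for p, s in zip(primary, secondary)))  # True
```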

30-04-2015 publication date

Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding with reduced requirements for division operations

Number: US20150117541A1
Assignee: Apple Inc

A method and apparatus for performing motion estimation in a digital video system is disclosed. Specifically, the present invention discloses a system that quickly calculates estimated motion vectors in a very efficient manner. In one embodiment, a first multiplicand is determined by multiplying a first display time difference between a first video picture and a second video picture by a power of two scale value. This step scales up a numerator for a ratio. Next, the system determines a scaled ratio by dividing that scaled numerator by a second display time difference between said second video picture and a third video picture. The scaled ratio is then stored for calculating motion vector estimations. By storing the scaled ratio, all the estimated motion vectors can be calculated quickly with good precision, since the scaled ratio preserves significant bits and reducing the scale is performed by simple shifts.
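The power-of-two scaling described in the abstract can be sketched as follows (a simplified fixed-point illustration; `SCALE_BITS` and all names are assumptions, not from the patent):

```python
# Sketch of the fixed-point trick from the abstract above: scale the
# display-time ratio up by a power of two, divide once, then reduce
# each motion-vector product with a cheap right shift instead of a
# per-vector division.

SCALE_BITS = 16  # assumed power-of-two scale factor (2**16)

def scaled_ratio(td_ab, td_bc):
    """One integer division: (td_ab * 2**SCALE_BITS) // td_bc."""
    return (td_ab << SCALE_BITS) // td_bc

def estimate_mv(mv_component, ratio):
    """Per-vector work is a multiply and a shift, no division."""
    return (mv_component * ratio) >> SCALE_BITS

ratio = scaled_ratio(2, 4)      # display-time ratio 0.5 in fixed point
print(estimate_mv(8, ratio))    # 8 * 0.5 -> 4
```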

05-05-2016 publication date

CONTENT ADAPTIVE PREDICTION DISTANCE ANALYZER AND HIERARCHICAL MOTION ESTIMATION SYSTEM FOR NEXT GENERATION VIDEO CODING

Number: US20160127741A1
Assignee:

Techniques related to content adaptive prediction distance analysis and hierarchical motion estimation for video coding are described. 1-34. (canceled) 35. A computer-implemented method for calculating prediction distance of a previous frame with respect to a current frame for which motion estimation and compensation for video coding is being performed, comprising: performing spatial analysis to determine a spatial complexity measure as well as horizontal and vertical texture direction data based at least in part on input video data; computing a motion estimate between consecutive frames to determine a motion vector; computing a sum of absolute differences and an average frame difference based at least in part on the determined motion vector; computing a temporal-spatial activity to determine a spatial index as well as a temporal index based at least in part on the sum of absolute differences, the average frame difference, the determined motion vector, the spatial complexity measure as well as the horizontal and vertical texture direction data; performing gain change detection to determine gain based at least in part on the input video data; performing shot change detection to determine a shot change based at least in part on the gain, the spatial index, the temporal index, the spatial complexity measure, as well as the horizontal and vertical texture direction data; performing a prediction distance calculation to determine a final p distance based at least in part on the spatial index, the temporal index, the determined shot change, and the average frame difference, wherein the p distance indicates a distance between frames to be encoded; and computing a final motion estimate of a frame with respect to a past frame to determine a final motion vector based at least in part on the final p distance. 36. The method of claim 35, further comprising: performing an estimated motion range analysis to determine an estimated motion direction and estimated motion range prior to ...

02-06-2016 publication date

SYSTEMS AND METHODS FOR VIDEO/MULTIMEDIA RENDERING, COMPOSITION, AND USER-INTERACTIVITY

Number: US20160155478A1
Author: Kalva Hari, Puri Atul
Assignee: Intel Corporation

An interactive video/multimedia application (IVM application) may specify one or more media assets for playback. The IVM application may define the rendering, composition, and interactivity of one or more of the assets, such as video. Video multimedia application data (IVMA data) may be used to define the behavior of the IVM application. The IVMA data may be embodied as a standalone file in a text or binary, compressed format. Alternatively, the IVMA data may be embedded within other media content. A video asset used in the IVM application may include embedded, content-aware metadata that is tightly coupled to the asset. The IVM application may reference the content-aware metadata embedded within the asset to define the rendering and composition of application display elements and user-interactivity features. The interactive video/multimedia application (defined by the video and multimedia application data) may be presented to a viewer in a player application. 1-20. (canceled) 21. A computing device to encode video content including metadata, comprising: processing logic to: determine metadata associated with a region of interest of at least one video frame, wherein the region of interest corresponds to at least one of a shape, an object, a motion vector, and a scene in the video frame; encode the video frame to generate an encoded video frame, wherein the region of interest is encoded at a higher resolution than the rest of the video frame; and embed the metadata according to a bitstream syntax of encoding within a bitstream comprising the encoded video frame, to tightly couple the metadata to the encoded video frame; and memory coupled to the processing logic, the memory to store the at least one video frame. 22. 
The computing device of claim 21, wherein the at least one of the shape, the object, the motion vector, and the scene in the video frame comprises a human face automatically detected by the processing logic in the region of interest of the at least one ...

16-05-2019 publication date

SYSTEMS AND METHODS FOR ADDING CONTENT TO VIDEO/MULTIMEDIA BASED ON METADATA

Number: US20190147914A1
Author: Kalva Hari, Puri Atul
Assignee: Intel Corporation

An interactive video/multimedia application (IVM application) may specify one or more media assets for playback. The IVM application may define the rendering, composition, and interactivity of one or more of the assets, such as video. Video multimedia application data (IVMA data) may be used to define the behavior of the IVM application. The IVMA data may be embodied as a standalone file in a text or binary, compressed format. Alternatively, the IVMA data may be embedded within other media content. A video asset used in the IVM application may include embedded, content-aware metadata that is tightly coupled to the asset. The IVM application may reference the content-aware metadata embedded within the asset to define the rendering and composition of application display elements and user-interactivity features. The interactive video/multimedia application (defined by the video and multimedia application data) may be presented to a viewer in a player application. 1. An apparatus comprising: processing logic to automatically detect an object in at least a portion of at least one image frame of video content, and to associate content with the detected object using metadata, wherein the metadata associates at least one multimedia element with the detected object, wherein the metadata comprises a model of the object, and wherein upon rendering of the image frame, the processing logic is to overlay the multimedia element on the detected object in the image frame based on the model; and memory coupled to the processing logic, the memory to store the image frame. 2. The apparatus of claim 1, wherein to overlay the multimedia element on the detected object, the processing logic uses the model of the object to determine positioning of the multimedia object on the detected object. 3. The apparatus of claim 1, wherein the processing logic is further to translate the detected object into the model. 4. The apparatus of claim 3, wherein the model is three dimensional. 5. 
The ...

21-06-2018 publication date

CONTENT ADAPTIVE GAIN COMPENSATED PREDICTION FOR NEXT GENERATION VIDEO CODING

Number: US20180176577A1
Author: Puri Atul
Assignee:

Techniques related to content adaptive gain compensated prediction for next generation video coding are described. 1-29. (canceled) 30. A computer-implemented method for video coding, comprising: obtaining frames of pixel data and having a current frame and a decoded reference frame to use as a motion compensation reference frame for the current frame; selecting a partition pattern wherein each partition is associated with more than one pixel and among patterns that use a varying number or varying arrangement or both of partitions to form a frame; determining brightness gain compensation values for the reference frame by providing a gain value and an offset value for individual partitions; and applying locally adaptive gain compensation by adjusting the brightness of a partition of the reference frame and adjusted by the gain compensation value so that multiple gain compensation values are provided for a single frame and depend on the location of the partition within the frame. 31. The method of wherein each partition has an average gain value and an average offset value based, at least in part, on an average of pixel values in the partition. 32. The method of wherein the partition pattern is selected by using at least one of: rate distortion optimization (RDO), and a method that balances quality of image with bit cost for using a particular partition. 33. The method of wherein multiple partitions are tested to determine which partition results in a best balance of quality and bit cost, based, at least in part, on distortion between a partition on the current frame and the corresponding partition on the reference frame, a bitrate associated with the gain and offset values, and the quantizer used for transform coefficients. 35. The method of wherein the partition pattern is selected from an index of predetermined partition patterns. 36. The method of wherein the index comprises index numbers each ...
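The per-partition gain and offset idea above can be illustrated with a minimal sketch (the least-squares-style estimator and all names are assumptions; the patent's exact estimator is not given here):

```python
# Hypothetical sketch of locally adaptive gain compensation as
# described above: each partition of the reference frame gets its
# own (gain, offset) pair derived from partition statistics, and
# reference pixels are adjusted as p' = gain * p + offset.

def gain_offset(cur, ref):
    """Simplified (gain, offset) fit matching partition statistics."""
    mean_c = sum(cur) / len(cur)
    mean_r = sum(ref) / len(ref)
    var_r = sum((r - mean_r) ** 2 for r in ref) / len(ref)
    cov = sum((c - mean_c) * (r - mean_r) for c, r in zip(cur, ref)) / len(ref)
    g = cov / var_r if var_r else 1.0
    return g, mean_c - g * mean_r

def compensate(ref, g, o):
    """Adjust reference-partition brightness: p' = g * p + o."""
    return [g * p + o for p in ref]

ref = [100, 110, 120, 130]
cur = [110, 120, 130, 140]          # same content, +10 brightness
g, o = gain_offset(cur, ref)
print(compensate(ref, g, o))        # [110.0, 120.0, 130.0, 140.0]
```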

06-07-2017 publication date

System and Method of Filtering Noise

Number: US20170195691A1
Assignee:

A system and method of removing noise in a bitstream is disclosed. Based on the segment classifications of a bitstream, each segment or portion is encoded with a different encoder associated with the portion model and chosen from a plurality of encoders. The coded bitstream for each segment includes information regarding which encoder was used to encode that segment. A circuit for removing noise in video content includes a first filter connected to a first input switch and a first output switch, the first filter being in parallel with a first pass-through line, a second filter connected to a second input switch and a second output switch, the second filter connected in parallel with a second pass-through line, and a third filter connected to a third input switch and a third output switch. 1. A method comprising: receiving data comprising a plurality of data portions, wherein a respective data portion of the plurality of data portions is chronologically distinct from other data portions of the plurality of data portions; identifying, via a processor, a classification parameter for the respective data portion; classifying the respective data portion into a respective content classification according to the classification parameter for the respective data portion; encoding the respective data portion using a respective encoder from a plurality of encoders according to the respective content classification, to yield encoded data; and multiplexing the encoded data with an encoded filter description. 2. The method of claim 1, further comprising: identifying filters associated with the respective data portion, to yield a filter descriptor; and encoding the filter description, to yield the encoded filter description. 3. The method of claim 1, wherein the data comprises video data. 4. 
The method of claim 1, wherein the respective content classification comprises at least one of a person, an object, an action, a location, a time scene change ...

14-07-2016 publication date

CONTENT ADAPTIVE TELECINE AND INTERLACE REVERSER

Number: US20160205343A1
Assignee:

Techniques related to processing a mixed content video stream to generate progressive video for encoding and/or display are discussed. Such techniques may include determining conversion techniques for various portions of the mixed content video stream and converting the portions based on the determined techniques. The conversion of true interlaced video may include content adaptive interlace reversal and the conversion of pseudo-interlaced telecine converted video may include adaptive telecine pattern reversal. 1. A computer-implemented method for processing video for encoding and/or display comprising: receiving a mixed content video stream comprising a plurality of video formats comprising at least a true interlaced format and a pseudo-interlaced telecine converted format; determining a first conversion technique for a first segment of the mixed content video stream having the true interlaced format and a second conversion technique for a second segment of the mixed content video stream having the telecined format, wherein the first and second conversion techniques are different; and converting the mixed content video stream to a progressive video stream based at least in part on the first conversion technique and the second conversion technique. 2. The method of claim 1, wherein the first conversion technique comprises a content adaptive deinterlacer and the second conversion technique comprises an adaptive telecine pattern reverser technique. 3. The method of claim 1, wherein determining the first conversion technique comprises determining a first frame format of a first frame of the first segment and a first frame group format of the first segment. 4. 
The method of claim 3 , wherein determining the first frame format comprises:determining a plurality of descriptors associated with content of the first frame;evaluating a plurality of comparison tests based on the plurality of descriptors; anddetermining the first frame format based on the comparison tests, wherein the ...
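As a rough illustration of telecine pattern reversal in the classic 3:2 pulldown case (this simple field-regrouping sketch is an assumption, not the patent's adaptive method):

```python
# Hypothetical sketch: reverse a 3:2 pulldown telecine pattern by
# regrouping fields. 4 film frames become 10 fields (3, 2, 3, 2);
# the reverser drops the duplicated fields to recover the 4 frames.

def telecine_32(frames):
    """Apply 3:2 pulldown: emit each frame as 3 or 2 fields alternately."""
    fields = []
    for i, f in enumerate(frames):
        fields += [f] * (3 if i % 2 == 0 else 2)
    return fields

def reverse_32(fields):
    """Undo the pattern by keeping one field per original frame."""
    frames, i, take_three = [], 0, True
    while i < len(fields):
        frames.append(fields[i])
        i += 3 if take_three else 2
        take_three = not take_three
    return frames

film = ["A", "B", "C", "D"]
print(telecine_32(film))                 # 10 fields with repeats
print(reverse_32(telecine_32(film)))     # ['A', 'B', 'C', 'D']
```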

19-07-2018 publication date

Content adaptive quality restoration filtering for next generation video coding

Number: US20180205968A1
Author: Atul Puri, Daniel Socek
Assignee: Intel Corp

Techniques related to quality restoration filtering for video coding are described.

09-10-2014 publication date

System and Method For Distributing Rights-Protected Content

Number: US20140304762A1
Author: Puri Atul K.
Assignee:

Various embodiments of a method and system for a content distribution mechanism. A content distribution mechanism is implemented to receive rights-protected content. Access to the rights-protected content is controlled according to a policy via a policy server. The distribution mechanism may receive an attempt to forward the rights-protected content to one or more recipients that do not currently have access to the rights-protected content. The distribution mechanism may hold the document and send a message requesting access rights to the rights-protected content for the recipient(s). In some embodiments, the distribution mechanism may send the message to a policy server. In other embodiments, the distribution mechanism may send the message to a policy administrator. Upon receiving acknowledgement that the recipient(s) have been granted access rights to the content, the distribution mechanism may forward the rights-protected content to the recipient(s). 1. A computer-implemented method, comprising: receiving, by a distribution mechanism, rights-protected content for a recipient who has access rights to the rights-protected content; receiving a request, from the recipient, to forward the received rights-protected content through the distribution mechanism to another recipient, wherein access rights for the rights-protected content are controlled according to a policy via a policy server; in response to determining that the another recipient does not have access rights to the rights-protected content, the distribution mechanism holding the content and sending a message to an entity authorized to grant access rights for the rights-protected content to request access rights to the rights-protected content for the another recipient; and upon receiving acknowledgement that the another recipient has been granted access rights to the rights-protected content: releasing said holding the rights-protected content, and forwarding the rights-protected content to the another recipient.

13-08-2015 publication date

CONTENT ADAPTIVE ENTROPY CODING FOR NEXT GENERATION VIDEO

Number: US20150229926A1
Author: Puri Atul
Assignee:

Techniques related to content adaptive entropy coding are described. A technique for video coding may include obtaining first and second video data for entropy encoding such that the first and second video data comprise different data types, determining a first entropy encoding technique for the first video data such that the first entropy encoding technique comprises at least one of an adaptive symbol-run variable length coding technique or an adaptive proxy variable length coding technique, entropy encoding the first video data using the first encoding technique to generate first compressed video data and the second video data using a second encoding technique to generate second compressed video data, and assembling the first compressed video data and the second compressed video data to generate an output bitstream. 1. A computer-implemented method for video coding, comprising: obtaining first video data and second video data for entropy encoding, wherein the first video data and the second video data comprise different data types; determining a first entropy encoding technique for the first video data, wherein the first entropy encoding technique comprises at least one of an adaptive symbol-run variable length coding technique or an adaptive proxy variable length coding technique; entropy encoding the first video data using the first encoding technique to generate first compressed video data and the second video data using a second encoding technique to generate second compressed video data; and assembling the first compressed video data and the second compressed video data to generate an output bitstream. 2. The method of claim 1, wherein the adaptive symbol-run variable length coding technique comprises converting the first video data from bit map data to at least one of an inverted bitmap, a differential bit map, or a gradient predictive bit map before applying adaptive symbol-run variable length coding. 3. The method of claim 1, wherein the ...
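The bit-map preprocessing named in claim 2 can be illustrated with a small sketch (hypothetical; the run-collapsing stage here is plain run counting, not the patent's actual adaptive VLC tables):

```python
# Hypothetical sketch of the bitmap preprocessing named above:
# differentiate a bit map so long runs of zeros emerge, then
# represent it as (symbol, run-length) pairs that a symbol-run
# variable length coder could encode compactly.

def differential(bits):
    """XOR each bit with its predecessor; turns slowly-changing
    maps into mostly-zero maps."""
    return [bits[0]] + [a ^ b for a, b in zip(bits, bits[1:])]

def symbol_runs(bits):
    """Collapse the map into (symbol, run-length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [tuple(r) for r in runs]

bitmap = [0, 0, 0, 1, 1, 1, 1, 0, 0]
print(differential(bitmap))               # only transitions remain
print(symbol_runs(differential(bitmap)))
```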

13-08-2015 publication date

VIDEO CODEC ARCHITECTURE FOR NEXT GENERATION VIDEO

Number: US20150229948A1
Author: Gokhale Neelesh, Puri Atul
Assignee:

Techniques related to video codec architecture for next generation video are described. 1-33. (canceled) 34. A computer-implemented method for video coding, comprising: segmenting a first video frame of a first type and a second video frame of a second type into a first plurality of tiles or super-fragments and a second plurality of tiles or super-fragments; partitioning the first plurality of tiles or super-fragments using a first partitioning technique and the second plurality of tiles or super-fragments using a second partitioning technique, wherein the first and second partitioning techniques are different; determining a selected prediction partitioning and an associated plurality of prediction partitions for the second video frame; generating a first reconstructed tile or super-fragment; generating deblock filtering parameters and a first final reconstructed tile or super-fragment based at least in part on the first reconstructed tile or super-fragment and the deblock filtering parameters; assembling the first final reconstructed tile or super-fragment and a second final reconstructed tile or super-fragment to generate a first decoded prediction reference picture; generating morphing characteristic parameters and a morphed prediction reference picture based at least in part on the morphing characteristic parameters and the first decoded reference picture; generating synthesizing characteristic parameters and a synthesized prediction reference picture based at least in part on the synthesizing characteristic parameters and the first decoded reference picture or a second decoded reference picture; determining a mode and a reference type for each of the plurality of prediction partitions; generating motion data associated with the plurality of prediction partitions based at least in part on one of the morphed prediction reference picture or the synthesized prediction reference picture; performing motion compensation based on the motion data and at least one of the morphed 
...

27-08-2015 publication date

METHOD AND APPARATUS TO PRIORITIZE VIDEO INFORMATION DURING CODING AND DECODING

Number: US20150245047A1
Assignee:

A method and apparatus for prioritizing video information during coding and decoding. Video information is received and an element of the video information, such as a visual object, video object layer, video object plane or key region, is identified. A priority is assigned to the identified element and the video information is encoded into a bitstream, such as a visual bitstream encoded using the MPEG-4 standard, including an indication of the priority of the element. The priority information can then be used when decoding the bitstream to reconstruct the video information. 1. A method comprising: identifying, via a processor, a priority of a video object layer in a plurality of video object layers associated with a video object; and transmitting an encoded video object in a bitstream, wherein the encoded video object comprises a three-bit priority code based on the priority of the video object layer. 2. The method of claim 1, further comprising transmitting a video object layer identifier code in the bitstream, wherein the video object layer identifier code indicates whether the priority has been specified for the video object layer. 3. The method of claim 2, wherein the video object layer identifier code comprises an is_video_object_layer identifier flag and the video object layer priority code comprises a video_object_layer_priority code. 4. The method of claim 2, wherein causal video object planes of the video object are assigned to a first video object layer and non-causal video object planes of the video object are assigned to a second video object layer. 5. The method of claim 2, wherein intra-coded video object planes and predictive coded video object planes are assigned to a first video object layer of the video object and bidirectionally-predictive coded video object planes are assigned to a second video object layer of the video object. 6. 
The method of claim 1 , further comprising:assigning a priority to a region of the video object, to yield a regional ...

06-11-2014 publication date

CONTENT ADAPTIVE FUSION FILTERING OF PREDICTION SIGNALS FOR NEXT GENERATION VIDEO CODING

Number: US20140328387A1
Author: Puri Atul, Socek Daniel
Assignee:

Techniques related to fusion improvement filtering of prediction signals for video coding are described. 1. A computer-implemented method for video coding, comprising: determining, via a prediction analyzer and prediction fusion filtering module, a fusion filter for at least a fusion filtering partition; generating, via the prediction analyzer and prediction fusion filtering module, a fusion filtered predicted picture based on fusion filtering at least a portion of the fusion filtering partition; generating, via the prediction analyzer and prediction fusion filtering module, header and overhead data indicating a fusion filter shape and a fusion filter prediction method associated with the fusion filter; and encoding, via an adaptive entropy encoder module, the header and overhead data into a bitstream. 2. The method of claim 1, wherein the fusion filter shape comprises a plurality of sparsely separated fusion filter coefficients. 3. The method of claim 1, wherein the fusion filter shape comprises at least one of a discontinuous diamond shape, a diamond shape with holes, a diamond shape having a checkered pattern, a discontinuous diamond shape having maximum dimensions of 5 pixels by 5 pixels, a discontinuous diamond shape with corner points, a quincunx shape, a square checkered pattern, a square checkered pattern having a size of 5 pixels by 5 pixels, a continuous diamond shape with corner points, a continuous diamond shape with corner points having maximum dimensions of 5 pixels by 5 pixels, a continuous diamond shape with corner points and intermediate points, or a continuous diamond shape with corner points and intermediate points extending beyond the corner points such that maximum dimensions of the first filter shape are 7 pixels by 7 pixels. 4. 
The method of claim 1 , further comprising:determining, via the prediction analyzer and prediction fusion filtering module, ...

06-11-2014 publication date

Content adaptive super resolution prediction generation for next generation video coding

Number: US20140328400A1
Assignee: Atul Puri, Neelesh N. Gokhale

Techniques related to super resolution prediction generation for video coding are described.

06-11-2014 publication date

CONTENT ADAPTIVE QUALITY RESTORATION FILTERING FOR NEXT GENERATION VIDEO CODING

Number: US20140328414A1
Author: Puri Atul, Socek Daniel
Assignee:

Techniques related to quality restoration filtering for video coding are described. 1. A computer-implemented method for video coding , comprising:determining, via a quality analyzer and quality restoration filtering module, a first quality restoration filter for a first partition of a reconstructed picture and a second quality restoration filter for a second partition of the reconstructed picture, wherein at least one of a shape or filter coefficients are different between the first and second quality restoration filters;applying, via the quality analyzer and quality restoration filtering module, the first quality restoration filter to at least a portion of the first partition and the second quality restoration filter to at least a portion of the second partition to generate a final reconstructed picture; andstoring the final reconstructed picture in a picture buffer.2. The method of claim 1 , wherein the first quality restoration filter is partially half symmetric such that at least a portion of a first plurality of coefficients of the first quality restoration filter are half symmetric and a second portion of the first plurality of coefficients are not half symmetric.3. The method of claim 1 , wherein the first quality restoration filter has a first shape comprising at least one of a substantially diamond shape or a rectangular shape.4. The method of claim 1 , further comprising:determining, via the quality analyzer and quality restoration filtering module, whether to apply no filter to a third partition of the reconstructed picture, to apply a third quality restoration filter with codebook determined coefficients to the third partition, or to apply the third quality restoration filter with encoder determined coefficients to the third partition, wherein the determining is based on a rate distortion optimization, wherein the rate distortion optimization determines a minimum of a no filter rate distortion based on a sum of absolute differences for no filter, a ...

Publication date: 27-11-2014

METHOD OF CONTENT ADAPTIVE VIDEO ENCODING

Number: US20140348228A1
Assignee:

A method of content adaptive encoding video comprising segmenting video content into segments based on predefined classifications or models. Based on the segment classifications, each segment is encoded with a different encoder chosen from a plurality of encoders. Each encoder is associated with a model. The chosen encoder is particularly suited to encoding the unique subject matter of the segment. The coded bit-stream for each segment includes information regarding which encoder was used to encode that segment. A matching decoder of a plurality of decoders is chosen using the information in the coded bitstream to decode each segment using a decoder suited for the classification or model of the segment. If scenes exist which do not fall in a predefined classification, or where classification is more difficult based on the scene content, these scenes are segmented, coded and decoded using a generic coder and decoder. 1. A method comprising:receiving header information for a predefined content model, the predefined content model associated with a sub-portion of a full frame of video content and not a remainder portion of the full frame; anddecoding, via a processor, the sub-portion differently than the remainder portion, wherein the decoding is based on the header information and the predefined content model.2. The method of claim 1 , wherein the sub-portion is rectangular.3. The method of claim 2 , wherein the sub-portion is identified on a frame-by-frame basis.4. The method of claim 1 , wherein the sub-portion is positioned in a top left corner of the full frame.5. The method of claim 1 , wherein the predefined content model causes the decoding to adaptively dequantize a region of interest.6. The method of claim 5 , wherein the region of interest comprises an arbitrary shaped object in a portion of the full frame claim 5 , where the portion is smaller than the full frame.7. The method of claim 5 , wherein region of interest information indicating the region of ...
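A minimal sketch of the dispatch the abstract describes: each segment carries a classification, the matching model-specific encoder is chosen, and the encoder id is placed in the stream so the decoder can pick the matching decoder, with a generic coder as fallback for hard-to-classify scenes. The classification names and encoder ids here are invented for illustration:

```python
# Hypothetical model-specific encoders keyed by segment classification.
ENCODERS = {
    "talking_head": lambda seg: f"TH[{seg}]",
    "sports":       lambda seg: f"SP[{seg}]",
    "generic":      lambda seg: f"GEN[{seg}]",  # fallback for unclassified scenes
}

def encode_segments(segments):
    """segments: [(classification, payload)] -> [(encoder_id, coded_bits)].

    The encoder id travels with each coded segment so a matching decoder
    can be selected on the other side, as the abstract describes."""
    stream = []
    for cls, payload in segments:
        name = cls if cls in ENCODERS else "generic"
        stream.append((name, ENCODERS[name](payload)))
    return stream
```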

Publication date: 11-12-2014

Method and Apparatus For Improved Coding Mode Selection

Number: US20140362903A1
Assignee:

In this disclosure, a novel method for direct mode enhancement in B-pictures and skip mode enhancement in P-pictures in the framework of H.264 (MPEG-4/Part 10) is disclosed. Direct mode and skip mode enhancements are achieved by clustering the values of the Lagrangian, removing outliers and specifying smaller values of the Lagrangian multiplier in the rate-distortion optimization for encoding mode selection. Experimental results using high quality video sequences show that bit rate reduction is obtained using the method of the present invention, at the expense of a slight loss in peak signal-to-noise ratio (PSNR). By conducting two different experiments, it has been verified that no subjective visual loss is visible despite the peak signal-to-noise ratio change. In relationship to the existing rate-distortion optimization methods currently employed in the (non-normative) MPEG-4/Part 10 encoder, the method of the present invention represents a simple and useful add-on. More importantly, when other solutions such as further increasing the values of the quantization parameter are not applicable, as inadmissible artifacts would be introduced in the decoded pictures, the method of the present invention achieves bit rate reduction without introducing visible distortion in the decoded sequences. Even more, despite the fact that the present document makes use of the H.264 framework, the proposed method is applicable in any video encoding system that employs rate-distortion optimization for encoding mode selection. 119-. (canceled)20. A method for selecting an encoding mode from a plurality of encoding modes , the method comprising: encoding and decoding a particular array of pixels with the encoding mode;', 'computing error values between pixels in the particular array of pixels and corresponding pixels in a decoded array of pixels;', 'from the particular array of pixels, selecting a subset of pixels that have error values that satisfy a threshold;', 'for the particular ...
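The mode-selection loop in the claim (encode and decode a pixel array with each candidate mode, threshold the per-pixel errors to drop outliers, then minimize the Lagrangian cost J = D + λR) can be sketched as below. The SSD distortion, the λ value, and the outlier threshold are illustrative stand-ins, not the patent's values:

```python
def mode_cost(orig, decoded, rate_bits, lam, outlier_threshold):
    """Lagrangian cost D + lambda*R, with outlier pixels excluded from D."""
    errors = [abs(o - d) for o, d in zip(orig, decoded)]
    kept = [e for e in errors if e <= outlier_threshold]   # drop outlier errors
    distortion = sum(e * e for e in kept)                  # SSD over kept pixels
    return distortion + lam * rate_bits

def select_mode(orig, candidates, lam=0.5, outlier_threshold=20):
    """candidates: {mode_name: (decoded_pixels, rate_bits)}; return cheapest mode."""
    return min(candidates,
               key=lambda m: mode_cost(orig, candidates[m][0],
                                       candidates[m][1], lam, outlier_threshold))
```

With outlier removal, a few large isolated errors no longer penalize a cheap mode such as skip, which is how the method buys bit-rate reduction without visible loss.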

Publication date: 11-12-2014

GENERALIZED SCALABILITY FOR VIDEO CODER BASED ON VIDEO OBJECTS

Number: US20140362907A1
Assignee:

A video coding system that codes video objects as scalable video object layers. Data of each video object may be segregated into one or more layers. A base layer contains sufficient information to decode a basic representation of the video object. Enhancement layers contain supplementary data regarding the video object that, if decoded, enhance the basic representation obtained from the base layer. The present invention thus provides a coding scheme suitable for use with decoders of varying processing power. A simple decoder may decode only the base layer of the video objects to obtain the basic representation. However, more powerful decoders may decode the base layer data of video objects and additional enhancement layer data to obtain improved decoded output. The coding scheme supports enhancement of both the spatial resolution and the temporal resolution of video objects. 1. A method for generating a coded video signal, comprising:identifying, by a processor, a video object from video data, wherein an instance of the video object at a given time is deemed a video object plane;determining, by the processor, one or more video object layers associated with the video object plane; andgenerating, by the processor, the coded video signal, comprising: coding a video object layer start code that marks one of the one or more video object layers as a new video object layer and a video object layer identification field to identify the new video object layer; and coding the video object plane associated with the new video object layer.2. The method of claim 1, further comprising:generating a scalability flag for identifying whether scalable coding is used.3. The method of claim 2, further comprising:generating a ref_layer_id field to indicate that the new video object layer is to be used as a reference for prediction when scalable coding is used.4.
The method of claim 2, further comprising:generating a ref_layer_sampling_direc flag to indicate whether a second video ...

Publication date: 11-12-2014

MATCHED FILTERING OF PREDICTION AND RECONSTRUCTION SIGNALS FOR NEXT GENERATION VIDEO

Number: US20140362911A1
Author: Puri Atul
Assignee:

Techniques related to matched filtering of prediction and reconstruction signals for video coding are described. 1. A computer-implemented method for video coding , comprising:partitioning a plurality of tiles or super-fragments of a video frame to generate a plurality of prediction partitions;differencing a plurality of predicted partitions associated with the plurality of prediction partitions to generate a corresponding plurality of prediction error data partitions;partitioning at least a portion of the prediction error data partitions to generate a plurality of coding partitions;performing one or more transforms on the plurality of coding partitions to generate transform coefficients, wherein the one or more transforms comprise at least one content adaptive transform;quantizing the transform coefficients to generate quantized transform coefficients;performing an inverse quantization on the quantized transform coefficients to generate reconstructed transform coefficients;performing one or more inverse transforms on the reconstructed transform coefficients to generate a plurality of reconstructed coding partitions;assembling the plurality of reconstructed coding partitions to generate a plurality of reconstructed prediction error data partitions;adding the plurality of predicted partitions to the plurality of reconstructed prediction error data partitions to generate at least one reconstructed tile or super-fragment;generating deblock filtering parameters and a first final reconstructed tile or super-fragment based at least in part on the at least one reconstructed tile or super-fragment and the deblock filtering parameters;assembling the first final reconstructed tile or super-fragment and a second final reconstructed tile or super-fragment to generate a reconstructed picture; andgenerating quality restoration in-loop filtering parameters and a reconstructed prediction reference picture based at least in part on the reconstructed picture and the quality ...

Publication date: 11-12-2014

CONTENT ADAPTIVE MOTION COMPENSATED PRECISION PREDICTION FOR NEXT GENERATION VIDEO CODING

Number: US20140362921A1
Assignee:

Techniques related to adaptive precision and filtering motion compensation for video coding are described. 1. A computer-implemented method for video coding , comprising:determining, via a motion compensated filtering predictor module, a motion compensation prediction precision associated with at least a portion of a current picture being decoded, wherein the motion compensation prediction precision comprises at least one of a quarter pel precision or an eighth pel precision;generating, via the motion compensated filtering predictor module, predicted pixel data of a predicted partition associated with a prediction partition of the current picture by filtering a portion of a decoded reference picture based at least in part on the motion compensation prediction precision; andcoding, via an entropy encoder, prediction partitioning indicators associated with the prediction partition and a motion vector indicating a positional difference between the prediction partition and an associated partition of the decoded reference picture into a bitstream.2. The method of claim 1 , further comprising:partitioning, via a prediction partitions generator module, the current picture into a plurality of prediction partitions comprising the prediction partition based on a partitioning technique comprising at least one of a bi-tree partitioning technique, a k-d tree partitioning technique, a codebook representation of a bi-tree partitioning technique, or a codebook representation of a k-d tree partitioning technique, wherein the portion of the current picture comprises the prediction partition; andcoding, via the entropy encoder, motion compensation prediction precision indicators comprising a first indicator indicating whether the motion compensation prediction precision for the prediction partition comprises the quarter pel precision or the eighth pel precision into the bitstream.3. 
The method of claim 1 , further comprising:partitioning, via a prediction partitions generator module, ...
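A toy illustration of the quarter- versus eighth-pel precision the claim signals, assuming simple linear interpolation rather than the patented filters: the motion vector is stored in sub-pel units, and prediction samples are produced by interpolating between integer reference pixels.

```python
def predict_row(ref, mv_subpel, precision, length):
    """Generate `length` predicted samples from a 1-D reference row.

    mv_subpel is a non-negative motion vector in sub-pel units;
    precision is 4 (quarter pel) or 8 (eighth pel)."""
    out = []
    for x in range(length):
        pos = x + mv_subpel / precision      # fractional reference position
        i, frac = int(pos), pos - int(pos)
        a = ref[i]
        b = ref[min(i + 1, len(ref) - 1)]    # clamp at the row edge
        out.append(round(a * (1 - frac) + b * frac))
    return out
```

A displacement of 2 quarter-pel units and 4 eighth-pel units both denote half a pixel, so they yield identical predictions; the finer precision simply allows intermediate positions the coarser one cannot express.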

Publication date: 11-12-2014

CONTENT ADAPTIVE PREDICTION AND ENTROPY CODING OF MOTION VECTORS FOR NEXT GENERATION VIDEO

Number: US20140362922A1
Assignee:

Techniques related to content adaptive prediction and entropy coding of motion vectors are described. 1. A computer-implemented method for video coding , comprising:obtaining motion vector data comprising a plurality of motion vectors for a video frame;determining a first predicted motion vector using a median motion vector prediction technique for each motion compensated block of the video frame;determining a second predicted motion vector using a selected motion vector prediction technique for each motion compensated block;selecting the first or the second predicted motion vector for each motion compensated block and defining a motion vector bit mask identifying the median or the selected motion vector prediction technique for each motion compensated block;determining a selected coding method for the motion vector bit mask; andencoding the motion vector bit mask into a bitstream based on the selected coding method.2. The method of claim 1 , further comprising:determining a frame level motion vector prediction method based on a bit cost analysis of a plurality of frame level motion vector prediction methods, wherein the frame level motion vector prediction method defines the selected motion vector prediction technique;differencing the plurality of motion vectors with an associated selected predicted motion vector for each motion compensated block to generate a plurality of motion vector differences; andcoding a frame level motion vector prediction method header, a selected coding method header, and an encoded payload comprising the plurality of motion vector differences into a bitstream.3. 
A computer-implemented method for video coding claim 1 , comprising:determining a selected motion vector prediction technique for a block of video data from a plurality of motion vector prediction techniques;generating a prediction motion vector for the block of video data based at least in part on the selected motion vector prediction technique;differencing the prediction motion ...
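The two-predictor scheme of claim 1 above (a median predictor competing per block with a selected alternate predictor, a bit mask recording the winner, and only the difference to the chosen predictor coded) might look like this sketch; the L1 cost metric and the alternate predictor are stand-ins for whatever the encoder's bit-cost analysis selects:

```python
def median_mv(neighbors):
    """Component-wise median of three neighboring motion vectors."""
    xs = sorted(v[0] for v in neighbors)
    ys = sorted(v[1] for v in neighbors)
    return (xs[1], ys[1])

def choose_predictors(mvs, neighbors_of, alt_predictor):
    """Per block: pick median or alternate predictor, record a mask bit,
    and emit the MV difference against the chosen predictor."""
    mask, diffs = [], []
    for i, mv in enumerate(mvs):
        med = median_mv(neighbors_of(i))
        alt = alt_predictor(i)
        cost = lambda p: abs(mv[0] - p[0]) + abs(mv[1] - p[1])
        use_alt = cost(alt) < cost(med)
        pred = alt if use_alt else med
        mask.append(1 if use_alt else 0)        # bit mask: which technique won
        diffs.append((mv[0] - pred[0], mv[1] - pred[1]))
    return mask, diffs
```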

Publication date: 01-10-2015

CONTENT ADAPTIVE, CHARACTERISTICS COMPENSATED PREDICTION FOR NEXT GENERATION VIDEO

Number: US20150281716A1
Author: Gokhale Neelesh, Puri Atul
Assignee:

Techniques related to content adaptive, characteristics compensated prediction for video coding are described. 132.-. (canceled)33. A computer-implemented method for video coding , comprising:generating a first decoded prediction reference picture and a second decoded prediction reference picture;generating, based at least in part on the first decoded prediction reference picture, a first modified prediction reference picture and first modifying characteristic parameters associated with the first modified prediction reference picture;generating, based at least in part on the second decoded prediction reference picture, a second modified prediction reference picture and second modifying characteristic parameters associated with the second modified prediction reference picture, wherein the second modified reference picture is of a different type than the first modified reference picture;generating motion data associated with a prediction partition of a current picture based at least in part on one of the first modified prediction reference picture or the second modified prediction reference picture; andperforming motion compensation based at least in part on the motion data and at least one of the first modified prediction reference picture or the second modified prediction reference picture to generate predicted partition data for the prediction partition.34. The method of claim 33 , further comprising:differencing the predicted partition data with original pixel data associated with the prediction partition to generate a prediction error data partition;partitioning the prediction error data partition to generate a plurality of coding partitions;performing a forward transform on the plurality of coding partitions to generate transform coefficients associated with the plurality of coding partitions;quantizing the transform coefficients to generate quantized transform coefficients; andentropy encoding the quantized transform coefficients, the first modifying ...

Publication date: 08-10-2015

CONTENT ADAPTIVE IMPAIRMENTS COMPENSATION FILTERING FOR HIGH EFFICIENCY VIDEO CODING

Number: US20150288964A1
Author: Puri Atul, Socek Daniel
Assignee:

A system and method for quality restoration filtering is described that can be used either in conjunction with video coding, or standalone for postprocessing. It uses a Wiener filtering approach in conjunction with an efficient codebook representation. 1.-48. (canceled) 49. A method for decoding an encoded video frame implemented by a video decoder device comprising a processor and a memory, the method comprising:obtaining, by the video decoder device, a compressed bitstream comprising a frame of motion compensated encoded video encoded at least in part according to a motion compensation process;decoding from the compressed bitstream a coded partition map comprising a prediction block size and a transform block size;decoding a quantized coefficient block of transform coefficients of the transform block size and a differential motion vector of the prediction block size from the compressed bitstream;inverse quantizing the quantized coefficient block into a set of de-quantized coefficients and inverse transforming the set of de-quantized coefficients into a decoded residual block;adding a first motion compensated prediction block of the prediction block size to the decoded residual block to form a decoded video block, which first motion compensated prediction block is obtained using a motion vector determined from the differential motion vector and a reference frame of decoded video;deblock filtering a set of decoded video blocks and assembling the reference frame of decoded video; anddetermining whether to apply an impairments-compensation filter to at least a portion of the frame of decoded video to form an impairment-compensated reconstructed frame of video from the frame of decoded video.50.
The method of claim 49, further comprising decoding an impairment-compensation flag from the compressed bitstream and determining, based on the flag, to apply the impairments-compensation filter to at least the portion of the frame of decoded video to form the ...
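The per-block reconstruction steps claim 49 enumerates (inverse quantize the coefficients, inverse transform them into a residual, add the motion-compensated prediction, and keep samples in pixel range) reduce to roughly the following sketch, with a scalar quantizer and an identity stand-in for the real transform:

```python
def reconstruct_block(quantized, qstep, prediction):
    """Decode one block: dequantize, (toy) inverse transform, add prediction, clip."""
    dequant = [q * qstep for q in quantized]   # inverse quantization
    residual = dequant                          # identity stand-in for the inverse transform
    return [max(0, min(255, p + r)) for p, r in zip(prediction, residual)]
```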

Publication date: 27-09-2018

CONTENT ADAPTIVE IMPAIRMENT COMPENSATION FILTERING FOR HIGH EFFICIENCY VIDEO CODING

Number: US20180278933A1
Author: Puri Atul, Socek Daniel
Assignee: Intel Corporation

A system and method for quality restoration filtering is described that can be used either in conjunction with video coding, or standalone for postprocessing. It uses a Wiener filtering approach in conjunction with an efficient codebook representation. 1.-48. (canceled) 49. A video-encoder-device-implemented method for encoding a set of impairments-compensation-filter coefficient values for an encoded video frame, the method comprising:obtaining, by the video encoder device, at least two codebooks, the first of the at least two codebooks including a first plurality of sets of impairments-compensation-filter coefficient values suitable for configuring a video-decoder impairments compensation filter of a first filter size to process frames of encoded video having at least a first image characteristic, the second of the at least two codebooks including a second plurality of sets of impairments-compensation-filter coefficient values suitable for configuring a video-decoder impairments compensation filter of a second filter size to process frames of encoded video having at least a second image characteristic, wherein said second filter size is smaller than said first filter size;during encoding of an unencoded frame of video to generate an encoded bitstream, the video encoder device:differencing a block of unencoded video in the unencoded frame with a first impairments-compensated prediction signal block from a motion compensation process to determine a residual block, transforming the residual block into a block of coefficients, quantizing the block of coefficients into a block of quantized coefficients, and encoding the block of quantized coefficients into the bitstream by an entropy coder;reverse quantizing and reverse transforming the block of quantized coefficients into a locally decoded block, assembling the locally decoded block into a locally assembled frame and deblock filtering the locally assembled frame into a deblocked frame;computing a target set of ...

Publication date: 05-11-2015

CONTENT ADAPTIVE ENTROPY CODING OF PARTITIONS DATA FOR NEXT GENERATION VIDEO

Number: US20150319441A1
Assignee:

Techniques related to content adaptive entropy coding of partitions data are described. 157.-. (canceled)58. A computer-implemented method for video coding , comprising:loading input data defining partitions of picture portions of a video frame;determining a multi-level pattern for a first picture portion, wherein the multi-level pattern comprises a base pattern selected from a plurality of available patterns and at least one level one pattern selected from the plurality of available patterns for at least one non-terminating portion of the base pattern;determining termination bits indicating terminations of pattern levels of the picture portions of the video frame, wherein the termination bits comprise first termination bits associated with the base pattern of the multi-level pattern and the first picture portion;determining a first entropy coded codeword associated with the base pattern and a second entropy coded codeword associated with the level one pattern;entropy encoding the termination bits; andwriting the first entropy coded codeword, the second entropy coded codeword, and the entropy coded termination bits to a bitstream.59. The method of claim 58 , further comprising:determining a variable length code tables selection mode for at least one of the video frame, a slice of the video frame, or a sequence of video frames from at least one of a first variable length code tables selection mode where a single variable length coding table is used for determining entropy coded codewords for every picture portion of the video frame and a second variable length code tables selection mode where a picture portion-based variable length coding table is selected for each picture portion of the video frame from two or more available variable length coding tables and the selected picture portion-based variable length coding table is used for determining entropy coded codewords for the associated picture portion.60. The method of claim 58 , wherein a first variable length ...

Publication date: 05-11-2015

CONTENT ADAPTIVE BI-DIRECTIONAL OR FUNCTIONALLY PREDICTIVE MULTI-PASS PICTURES FOR HIGH EFFICIENCY NEXT GENERATION VIDEO CODING

Number: US20150319442A1
Author: Puri Atul
Assignee:

Techniques related to content adaptive bi-directional or functionally predictive multi-pass pictures for high efficiency next generation video coding. 134.-. (canceled)35. A computer-implemented method for video coding , comprising:receiving frames in an input video order;coding a first segment of a current frame in a first pass, wherein the first segment is associated with content shown on the frame, and wherein the current frame is a bi-directional (B-picture) or functional (F-picture) multi-pass picture wherein both the bi-directional and functional multi-pass picture are provided with the option to use at least one past reference frame, at least one future reference frame, or both, and wherein current, past, and future are relative to the input video order, and wherein the functional multi-pass picture has the option to use one or more modified reference frames that are modified by a morphing technique or a synthesis technique;coding at least a second segment of the current frame in at least a second pass and that is different than the first segment, wherein the first pass and second pass are performed at different times so that the use of reference frames during each pass occurs at different times, and wherein coding at least both the first and second segments being cooperatively used to achieve substantially a full quality coding of the substantially entire current frame; andwriting data associated with reconstructing the segments to an encoder bitstream either at the same time instance or at different time instances.36. The method of wherein the first and second segments have at least one different reference frame.37. The method of wherein coding for at least one other frame is at least partially performed in between the coding for the first and second segments of the current frame.38. The method of wherein the other frame is a P-picture.39. The method of wherein at least part of the other frame is used as a reference to code the second segment.40. 
The method ...

Publication date: 02-11-2017

CONTENT ADAPTIVE PREDICTION AND ENTROPY CODING OF MOTION VECTORS FOR NEXT GENERATION VIDEO

Number: US20170318297A1
Assignee:

Techniques related to content adaptive prediction and entropy coding of motion vectors are described. 1. A video encoder comprising:a memory to store video data; and determine a pattern to provide a sequence of neighboring blocks of a block of the video data, wherein the pattern comprises a substantially spiral shape;', 'scan original motion vectors associated with the neighboring blocks according to the sequence for a first non-zero original motion vector; and', 'when the first non-zero original motion vector is determined, to generate a prediction motion vector associated with the block as the first non-zero original motion vector; or', 'when no non-zero original is determined, to generate the prediction motion vector associated with the block as a zero motion vector; and', 'to generate an output bitstream based in part on the prediction motion vector for the block., 'a processor coupled to the memory, the processor to2. The video encoder of claim 1 , wherein the pattern comprises a shape beginning at a first neighboring block of the block and progressing through neighboring blocks clockwise around the block.3. The video encoder of claim 1 , wherein the pattern comprises a shape beginning at a first neighboring block to the left of and aligned with the top of the block and progressing through neighboring blocks clockwise around the block.4. The video encoder of claim 1 , wherein the pattern comprises a shape beginning at a first neighboring block to the left of and aligned with the top of the block and progressing upwards to a second block to the left of and above the block claim 1 , then to the right along the top of the block to a third neighboring block to the right of and above the block claim 1 , then downwards to a fourth neighboring block to the right of and below the block claim 1 , then left to a fifth neighboring block to the left of and below the block.5. The video encoder of claim 1 , wherein the pattern comprises a shape beginning at a first ...
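The spiral scan of claims 1-5 above (walk the eight neighbors of a block clockwise, predict with the first non-zero motion vector found, fall back to the zero vector) can be sketched as below. The clockwise neighbor order is one plausible reading of the claim, not the exact patented pattern:

```python
# One plausible clockwise order: left, top-left, top, top-right,
# right, bottom-right, bottom, bottom-left (offsets are (dx, dy)).
SPIRAL = [(-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1), (0, 1), (-1, 1)]

def predict_mv(mv_field, bx, by):
    """mv_field: dict {(x, y): (mvx, mvy)}; return the prediction for block (bx, by)."""
    for dx, dy in SPIRAL:
        mv = mv_field.get((bx + dx, by + dy), (0, 0))
        if mv != (0, 0):
            return mv          # first non-zero neighbor in scan order wins
    return (0, 0)              # all-zero neighborhood -> zero motion vector
```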

Publication date: 19-11-2015

METHOD AND APPARATUS FOR ENABLING TEXT EDITING IN A SCANNED DOCUMENT WHILE MAINTAINING FIDELITY OF THE APPEARANCE OF THE TEXT

Number: US20150332491A1
Assignee: ADOBE SYSTEMS INCORPORATED

A computer implemented method and apparatus for enabling text editing in a scanned document while maintaining fidelity of appearance of the text. The method comprises creating a synthesized font comprising a plurality of characters using characters present in a scanned document; replacing the plurality of characters in the scanned document with characters from the plurality of characters from the synthesized font; and enabling editing of the scanned document wherein enabling editing comprises adding at least some characters from the plurality of characters of the synthesized font to the document for at least some characters added during editing.

Publication date: 01-11-2018

Fast color based and motion assisted segmentation of video into region-layers

Number: US20180315196A1
Author: Atul Puri, Daniel Socek
Assignee: Intel Corp

Techniques related to improved video frame segmentation based on color, motion, and texture are discussed. Such techniques may include segmenting a video frame of a video sequence based only on dominant color when the frame has neither a dominant motion nor a global motion in a high-probability region of dominant color within the video frame.

Publication date: 01-11-2018

FAST MOTION BASED AND COLOR ASSISTED SEGMENTATION OF VIDEO INTO REGION LAYERS

Number: US20180315199A1
Author: Puri Atul, Socek Daniel
Assignee:

Techniques related to improved video frame segmentation based on motion, color, and texture are discussed. Such techniques may include segmenting a video frame of a video sequence based on differencing global motion or dominant motion from local motion in the video frame. 1. A computer implemented method for segmenting video frames into region-layers comprising:determining a local motion vector field based on a video frame and a reference video frame;determining affine or perspective parametric global motion parameters based on the video frame, the reference video frame, and the local motion vector field;mapping the global motion parameters to a global motion vector field;differencing the local motion vector field and the global motion vector field and mapping the difference to a global/local probability map, wherein the global/local probability map comprises a plurality of probabilities, each probability corresponding to a location within the video frame and indicating a probability the location comprises local motion; andproviding a regions mask corresponding to the video frame based on the global/local probability map, the regions mask indicating pixels of the video frame are included in one of a first or second region layer.2. The method of claim 1 , further comprising:generating a dominant motion for the video frame; andproviding the regions mask based at least in part on the dominant motion.3. The method of claim 1 , wherein determining the local motion vector field comprises:down sampling the video frame and the reference video frame;performing a local motion search of the down sampled reference video frame based on blocks of the down sampled video frame to generate a first motion vector field;up sampling the first motion vector field;performing a first block size refined motion search and a second block size refined motion search based on the up sampled motion vector field to generate a first block size motion vector field and a second block size motion ...
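The global/local separation in claim 1 above (map affine global-motion parameters to a per-location global motion vector field, difference it against the local field, and threshold into a two-region mask) can be sketched as follows; the parameter values and the threshold are illustrative, and the probability-map smoothing of the claim is collapsed into a hard threshold:

```python
def affine_mv(params, x, y):
    """params = (a, b, c, d, e, f): affine map (x, y) -> displacement vector."""
    a, b, c, d, e, f = params
    return (a * x + b * y + c - x, d * x + e * y + f - y)

def region_mask(local_field, params, thresh=1.5):
    """local_field: {(x, y): (mvx, mvy)}; 1 = local motion (foreground), 0 = global."""
    mask = {}
    for (x, y), (lx, ly) in local_field.items():
        gx, gy = affine_mv(params, x, y)
        # Large local-vs-global difference -> location follows its own motion.
        mask[(x, y)] = 1 if abs(lx - gx) + abs(ly - gy) > thresh else 0
    return mask
```

With parameters (1, 0, 2, 0, 1, 0), the global field is a uniform 2-pixel horizontal pan; any block whose local vector departs from that pan lands in the local-motion region layer.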

Publication date: 19-11-2015

CONTENT ADAPTIVE BACKGROUND FOREGROUND SEGMENTATION FOR VIDEO CODING

Number: US20150334398A1
Author: Puri Atul, Socek Daniel
Assignee:

Techniques related to content adaptive background-foreground segmentation for video coding are described.

Publication date: 23-11-2017

FAST AND ROBUST HUMAN SKIN TONE REGION DETECTION FOR IMPROVED VIDEO CODING

Number: US20170339409A1
Author: Puri Atul, Socek Daniel
Assignee:

Techniques related to improved video coding based on skin tone detection are discussed. Such techniques may include selecting from static skin probability histograms and/or a dynamic skin probability histogram based on a received video frame, generating a skin tone region based on the selected skin probability histogram and a face region of the video frame, and encoding the video frame based on the skin tone region to generate a coded bitstream. 1. A computer implemented method for performing video coding based on skin tone detection comprising:receiving a video frame in a first color format and in a second color format;selecting one of a plurality of static skin probability histograms based at least in part on the first color format video frame;generating a dynamic skin probability histogram based on the second color format video frame and a face region in the video frame;determining whether the dynamic skin probability histogram is valid or invalid;generating a skin tone region based on the dynamic skin probability histogram when the dynamic skin probability histogram is valid or based on the selected static skin probability histogram when the dynamic skin probability histogram is invalid; andencoding the video frame based at least in part on the skin tone region to generate a coded bitstream.2. The method of claim 1 , wherein the first color format frame is a Yrg color format frame and the second color frame is a YUV color format frame.3. The method of claim 2 , further comprising:receiving an input video frame in a YUV 4:2:0 format;downsampling the YUV 4:2:0 format video frame to a downsampled YUV 4:2:0 format video frame and converting the downsampled YUV 4:2:0 format video frame to a YUV 4:4:4 format video frame to generate the YUV color format frame; andconverting the YUV 4:4:4 format video frame to a Yrg 4:4:4 format to generate the Yrg color format frame.4. The method of claim 1 , wherein the plurality of static skin probability histograms consist of a ...
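A toy version of the static skin-probability-histogram lookup the abstract mentions: each pixel's normalized (r, g) chromaticity is quantized into histogram bins, and the pixel is marked as skin when the bin's learned probability clears a threshold. The tiny 4x4 histogram and threshold here are fabricated for illustration (real histograms are trained, and the validity test against the dynamic histogram is omitted):

```python
BINS = 4  # 4x4 histogram over normalized r, g in [0, 1)

def rg_bin(r, g, b):
    """Quantize a pixel's normalized chromaticity into an (r_bin, g_bin) index."""
    s = r + g + b or 1                       # guard against the all-zero pixel
    return (min(int(r / s * BINS), BINS - 1),
            min(int(g / s * BINS), BINS - 1))

def skin_mask(pixels, histogram, thresh=0.5):
    """pixels: [(r, g, b)]; histogram: {(r_bin, g_bin): P(skin)}; -> [0/1] per pixel."""
    return [1 if histogram.get(rg_bin(*p), 0.0) >= thresh else 0 for p in pixels]
```

Normalizing by r + g + b makes the lookup largely invariant to brightness, which is why chromaticity-space histograms are a common choice for skin detection.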

Publication date: 23-11-2017

FAST AND ROBUST FACE DETECTION, REGION EXTRACTION, AND TRACKING FOR IMPROVED VIDEO CODING

Number: US20170339417A1
Authors: Puri Atul, Socek Daniel
Assignee:

Techniques related to improved video coding based on face detection, region extraction, and tracking are discussed. Such techniques may include performing a facial search of a video frame to determine candidate face regions in the video frame, testing the candidate face regions based on skin tone information to determine valid and invalid face regions, rejecting invalid face regions, and encoding the video frame based on valid face regions to generate a coded bitstream.

1. A computer implemented method for performing video coding based on face detection comprising: receiving a video frame; performing a multi-stage facial search of the video frame based on predetermined feature templates and a predetermined number of stages to determine a first candidate face region and a second candidate face region in the video frame; testing the first and second candidate face regions based on skin tone information to determine the first candidate face region is a valid face region and the second candidate face region is an invalid face region; rejecting the second candidate face region and outputting the first candidate face region; and encoding the video frame based at least in part on the first candidate face region being a valid face region to generate a coded bitstream.
2. The method of claim 1, wherein the skin tone information comprises a skin probability map.
3. The method of claim 1, wherein the video frame comprises one of a plurality of video frames of a video sequence, the method further comprising: determining the video frame is a key frame of the video sequence, wherein said performing the multi-stage facial search is performed in response to the video frame being a key frame of the video sequence.
4. The method of claim 1, wherein the video frame comprises one of a plurality of video frames of a video sequence, the method further comprising: determining the video frame is a key frame of the video sequence, wherein said testing the first and second ...
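The candidate-validation step can be illustrated with a short sketch. This assumes the skin tone information is a per-pixel skin probability map and uses an illustrative mean-probability threshold; it is not the patented test, and the function and parameter names are hypothetical.

```python
import numpy as np

def validate_face_regions(skin_prob, candidates, min_avg_prob=0.4):
    """Accept a candidate face rectangle only if the mean skin probability
    inside it reaches a threshold; reject it otherwise. `candidates` is a
    list of (x, y, w, h) boxes in pixel coordinates."""
    valid, invalid = [], []
    for (x, y, w, h) in candidates:
        window = skin_prob[y:y + h, x:x + w]
        (valid if window.mean() >= min_avg_prob else invalid).append((x, y, w, h))
    return valid, invalid
```

Rejected boxes would simply not be passed on to the encoder's face-aware quality control.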

Publication date: 29-11-2018

CONTENT ADAPTIVE MOTION COMPENSATED TEMPORAL FILTERING FOR DENOISING OF NOISY VIDEO FOR EFFICIENT CODING

Number: US20180343448A1
Assignee:

Techniques related to improved video denoising using content adaptive motion compensated temporal filtering are discussed. Such techniques may include determining whether a block of a video frame is motion compensable and, when the block is motion compensable, generating a denoised block corresponding to the block using the block itself and averaged reference blocks from two or more motion compensation reference frames.

1. A computer implemented method for reducing noise in video comprising: receiving a video frame having a target block and two or more second blocks that neighbor or overlap the target block; performing motion estimation and compensation to determine, for the target block, a first motion compensated block from a first reference frame and a second motion compensated block from a second reference frame and to determine, for the target block, a plurality of reference blocks, each reference block comprising a block of the first or second reference frame overlapping a translation of the target block to the first or second reference frame when motion compensation is performed on a corresponding one of the second blocks; determining whether the target block is motion compensable based at least on motion between the target block and the first and second motion compensated blocks and comparisons of the target block to the first and second motion compensated blocks; when the target block is motion compensable, generating a denoised block corresponding to the target block based on the target block, the first motion compensated block, the second motion compensated block, and the reference blocks; when the target block is not motion compensable, generating the denoised block corresponding to the target block based on a spatial filtering of the target block; and outputting a denoised video frame comprising the denoised block.
2. The method of claim 1, wherein each of the reference blocks comprises pixel values of the first or second reference frame overlapped by ...
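The core decision above, temporal averaging when motion compensation succeeds and spatial filtering when it does not, can be sketched as follows. The mean-absolute-difference test and its threshold are illustrative stand-ins for the patent's compensability criteria, and the neighboring reference blocks are omitted for brevity.

```python
import numpy as np

def denoise_block(target, mc_prev, mc_next, mad_thresh=6.0):
    """If the target block matches its motion compensated references well
    enough (low mean absolute difference), average them temporally;
    otherwise fall back to a simple vertical 3-tap spatial filter."""
    mad_prev = np.abs(target - mc_prev).mean()
    mad_next = np.abs(target - mc_next).mean()
    if mad_prev < mad_thresh and mad_next < mad_thresh:
        # motion compensable: temporal average of target and references
        return (target + mc_prev + mc_next) / 3.0
    # not motion compensable: spatial smoothing of the target block only
    pad = np.pad(target, 1, mode='edge')
    return (pad[:-2, 1:-1] + pad[1:-1, 1:-1] + pad[2:, 1:-1]) / 3.0
```

Averaging across frames cancels zero-mean noise without blurring detail, which is why the compensability test gates the temporal path.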

Publication date: 17-12-2015

SYSTEM AND METHOD FOR HIGHLY CONTENT ADAPTIVE QUALITY RESTORATION FILTERING FOR VIDEO CODING

Number: US20150365703A1
Assignee:

Techniques related to highly content adaptive quality restoration filtering for video coding.

1. A computer-implemented method of adaptive quality restoration filtering comprising: obtaining video data of reconstructed frames; generating a plurality of alternative block-region adaptation combinations for a reconstructed frame of the video data comprising dividing a reconstructed frame into a plurality of regions, associating a region filter with each region wherein the region filter has a set of filter coefficients associated with pixel values within the corresponding region, classifying blocks forming the reconstructed frame into classifications that are associated with different gradients of pixel value within a block, and associating a block filter for individual classifications and of sets of filter coefficients associated with pixel values of blocks assigned to the classification; and using both region filters and block filters on the reconstructed frame to modify the pixel values of the reconstructed frame.
2. The method of claim 1 comprising using the region filters on the reconstructed frame except at openings formed at blocks on the reconstructed frame that are excluded from region filter calculations and are in one or more block classifications selected to be part of the combination, wherein the block filters are used with block data at the openings.
3. The method of comprising modifying the block-region arrangement in the combinations by forming iterations where each iteration of a combination has a different number of: (1) block classifications that share a filter, or (2) regions that share a filter, or any combination of (1) and (2); and determining which iteration of a plurality of the combinations results in the lowest rate distortion for use to modify the pixel values of the reconstructed frame.
4. The method of wherein an initial arrangement of the combinations establish a maximum limitation as to the number of regions and block ...
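One block-region adaptation combination can be sketched as below. For brevity each "filter" is a scalar gain rather than a set of filter coefficients, and the region/classification maps are given per block; all names and the block size are illustrative, not the patented arrangement.

```python
import numpy as np

def apply_block_region_filtering(frame, region_map, region_filters,
                                 block_class, block_filters, use_block_for,
                                 block_size=8):
    """Filter each block with its region's filter unless its classification
    is in `use_block_for`, in which case the class's block filter is used
    at that 'opening' instead of the region filter."""
    out = frame.astype(float).copy()
    h, w = frame.shape
    for by in range(0, h, block_size):
        for bx in range(0, w, block_size):
            cls = block_class[by // block_size, bx // block_size]
            reg = region_map[by // block_size, bx // block_size]
            gain = block_filters[cls] if cls in use_block_for else region_filters[reg]
            out[by:by + block_size, bx:bx + block_size] *= gain
    return out
```

An encoder would evaluate several such combinations and keep the one with the lowest rate-distortion cost, as claim 3 describes.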

Publication date: 24-12-2015

CONTENT ADAPTIVE BITRATE AND QUALITY CONTROL BY USING FRAME HIERARCHY SENSITIVE QUANTIZATION FOR HIGH EFFICIENCY NEXT GENERATION VIDEO CODING

Number: US20150373328A1
Authors: Puri Atul, YENNETI SAIRAM
Assignee:

Techniques related to content adaptive bitrate and quality control by quantization for high efficiency next generation video coding are described.

1. A computer-implemented method for video coding, comprising: obtaining frames of pixel data in an input video order and associated with a multi-level hierarchy comprising a base level with at least I-pictures or P-pictures or both that are used as reference frames, at least one intermediate level with pictures that use frames on the base level as references, and a maximum level with pictures that are not used as reference frames, and that use the frames of the other levels as references, wherein P-pictures use past frames relative to the order as references, and wherein pictures on the maximum level are provided with the option to use past reference frames, future reference frames, or both; and determining a quantization parameter (Qp) for the frames depending at least on the level of the hierarchy of at least one current frame, and wherein each frame is given a rank associated with the level the frame is on.
2. The method of claim 1 comprising providing, at least initially, a smaller quantization parameter for the frames the closer the level of the frame is to the base level.
3. The method of claim 1 comprising providing, at least initially, the base level with the smallest quantization parameter for the frames relative to all the other levels.
4. The method of comprising forming an at least initial Qp of the rank (0) P-picture from a predetermined relationship between Qp and a value of average temporal complexity of available complexities of future frames per target bitrate.
5. The method of comprising using at least the initial Qp of the P-picture rank (0) frames to form the Qp of the rank (0) non-P-picture frames.
6. The method of comprising using a predetermined mapping of P-picture rank (0) Qp value to rank (0) non-P-picture Qp value to form the rank (0) non-P-picture Qp value for a frame.
7. The ...
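The rank-sensitive assignment in claims 2 and 3, smallest Qp at the base level and larger Qp the farther a frame is from it, reduces to a tiny function. The fixed per-level delta and the H.264/HEVC-style Qp ceiling of 51 are illustrative assumptions, not values from the patent.

```python
def rank_based_qp(base_qp, rank, delta=3, max_qp=51):
    """Assign a frame's quantization parameter from its rank in the coding
    hierarchy: rank 0 (base level) gets the smallest Qp, each higher level
    adds an illustrative fixed delta, clamped to the codec's Qp ceiling."""
    return min(base_qp + delta * rank, max_qp)
```

Reference frames thus get the best quality, which pays off because every dependent frame predicts from them.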

Publication date: 27-12-2018

Content, psychovisual, region of interest, and persistence based adaptive quantization for video coding

Number: US20180376153A1
Assignee: Intel Corp

Techniques related to improved video encoding including content, psychovisual, region of interest, and persistence based adaptive quantization are discussed. Such techniques may include generating block level rate distortion optimization Lagrange multipliers and block level quantization parameters for blocks of a picture to be encoded and determining coding parameters for the blocks based on a rate distortion optimization using the Lagrange multipliers and quantization parameters.
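The coding-parameter decision the abstract describes is a Lagrangian rate distortion optimization: for each block, pick the mode minimizing J = D + λ·R, where the block-level λ (and Qp) come from the content, psychovisual, ROI, and persistence analysis. A minimal mode-selection sketch, with the analysis itself out of scope:

```python
def choose_mode(distortions, rates, lam):
    """Block-level RDO: given per-mode distortions D and bit costs R and a
    block-level Lagrange multiplier `lam`, return the index of the mode
    minimizing J = D + lam * R."""
    costs = [d + lam * r for d, r in zip(distortions, rates)]
    return costs.index(min(costs))
```

Raising λ for perceptually unimportant blocks biases the choice toward cheaper modes; lowering it in regions of interest buys quality with bits.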

Publication date: 04-04-2003

Method of transmitting layered video-coded information

Number: JP2003102008A
Assignee: AT&T Corp

(57) [Abstract] [Problem] To provide an improved method of transmitting layered video-coded information. [Solution] Video-coded information 113 is transmitted over a network at a priority determined based on feedback from the network. In one aspect, the feedback includes a response to a request for information on whether the network currently has capacity available to carry additional high-priority traffic. In one aspect, when permission to send high-priority data is granted, a candidate base layer frame is transmitted over the network as a base layer frame; when permission to send high-priority data is denied, it is transmitted over the network as an enhancement layer frame. In another aspect, the candidate base layer frame is deleted when permission to send high-priority data is denied.

Publication date: 25-04-2000

Digital multi-view video compression with complexity and compatibility constraints

Number: US6055012A
Assignee: Lucent Technologies Inc

In a system and method for transmitting and displaying multiple different views of a scene, three or more simultaneous scene signals, representing multiple different views of a scene, are provided by an appropriate camera arrangement to a spatial multiplexer. The size and resolution of the scene signals are reduced as necessary to combine the multiple scene signals into two super-view signals. The super-view signals are encoded using compression based on redundancies between the two super-views and then transmitted. A decoder receives the transmitted data signal and extracts the two super-view signals according to the inverse of the encoding operation. A spatial demultiplexer recovers the individual scene signals from the decoded super-view signals in accordance with a defined multiplexed order and arrangement. The scene signals are then interpolated as needed to restore the original resolution and size and subsequently displayed.
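The spatial multiplexing step can be sketched as follows. This assumes a simple 2:1 downsampling in each dimension and a side-by-side 1×N tiling into two super-views; the actual reduction and multiplexed arrangement are implementation choices signaled to the demultiplexer, and are not specified here.

```python
import numpy as np

def multiplex_superviews(views):
    """Reduce each view 2x in both dimensions and tile the reduced views
    side by side into two 'super-view' frames (half the views in each),
    ready for redundancy-exploiting compression as a stereo-like pair."""
    reduced = [v[::2, ::2] for v in views]
    half = len(reduced) // 2
    sv1 = np.hstack(reduced[:half])
    sv2 = np.hstack(reduced[half:])
    return sv1, sv2
```

The decoder inverts the tiling in the defined order and interpolates each view back to its original size.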

Publication date: 13-07-1993

Adaptive coding and decoding of frames and fields of video

Number: US5227878A
Assignee: AT&T Bell Laboratories Inc

Improved compression of digital signals relating to high resolution video images is accomplished by an adaptive and selective coding of digital signals relating to frames and fields of the video images. Digital video input signals are analyzed and a coding type signal is produced in response to this analysis. This coding type signal may be used to adaptively control the operation of one or more types of circuitry which are used to compress digital video signals so that less bits, and slower bit rates, may be used to transmit high resolution video images without undue loss of quality. For example, the coding type signal may be used to improve motion compensated estimation techniques, quantization of transform coefficients, scanning of video data, and variable word length encoding of the data. The improved compression of digital video signals is useful for video conferencing applications and high definition television, among other things.
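A common way such a coding type signal is derived, offered here only as an illustrative heuristic and not the patented analysis, is to compare vertical activity within the interleaved frame block against activity within its separated fields: heavy inter-field motion makes the fields individually smoother, so field coding wins.

```python
import numpy as np

def choose_frame_or_field(frame_block):
    """Adaptive frame/field coding-type decision sketch: sum absolute
    row-to-row differences for the full frame block and for its separated
    top/bottom fields; code whichever arrangement is smoother."""
    frame_diff = np.abs(np.diff(frame_block, axis=0)).sum()
    top, bottom = frame_block[0::2], frame_block[1::2]
    field_diff = (np.abs(np.diff(top, axis=0)).sum()
                  + np.abs(np.diff(bottom, axis=0)).sum())
    return 'field' if field_diff < frame_diff else 'frame'
```

The returned coding type can then steer motion estimation, quantization, scanning, and entropy coding, as the abstract describes.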

Publication date: 09-03-2004

Bidirectionally predicted pictures or video object planes for efficient and flexible video coding

Number: US6704360B2
Assignee: AT&T Corp

A method is provided for decoding a bit stream representing an image that has been encoded. The method includes the steps of: performing an entropy decoding of the bit stream to form a plurality of transform coefficients and a plurality of motion vectors; performing an inverse transformation on the plurality of transform coefficients to form a plurality of error blocks; determining a plurality of predicted blocks based on bidirectional motion estimation that employs the motion vectors, wherein the bidirectional motion estimation includes a direct prediction mode and a second prediction mode; and adding the plurality of error blocks to the plurality of predicted blocks to form the image. The second prediction mode may include forward, backward, and interpolated prediction modes.
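The prediction modes named in the abstract reduce to a small per-block rule. In this sketch the motion compensation (shifting references by the decoded motion vectors) is assumed already applied, and the direct mode is omitted; interpolated prediction is the average of the past and future references.

```python
import numpy as np

def predict_block(fwd_ref, bwd_ref, mode):
    """B-picture block prediction: forward uses the past reference,
    backward the future one, interpolated averages the two."""
    if mode == 'forward':
        return fwd_ref.astype(float)
    if mode == 'backward':
        return bwd_ref.astype(float)
    if mode == 'interpolated':
        return (fwd_ref.astype(float) + bwd_ref.astype(float)) / 2.0
    raise ValueError(mode)

def reconstruct(error_block, predicted):
    """Decoder step: add the inverse-transformed error block to the
    predicted block to form the output image block."""
    return error_block + predicted
```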

Publication date: 09-10-2007

Systems and methods for playing, browsing and interacting with MPEG-4 coded audio-visual objects

Number: US7281200B2
Assignee: AT&T Corp

A number of novel configurations for MPEG-4 playback, browsing and user interaction are disclosed. MPEG-4 playback systems are not simple extensions of MPEG-2 playback systems, but, due to object based nature of MPEG-4, present new opportunities and challenges in synchronized management of independent coded objects as well as scene composition and presentation. Therefore, these configurations allow significantly new and enhanced multimedia services and systems. In addition, MPEG-4 aims for an advanced functionality, called Adaptive Audio Visual Session (AAVS) or MPEG-J. Adaptive Audio Visual Session (AAVS) (i.e., MPEG-AAVS, MPEG-Java or MPEG-J) requires, in addition to the definition of configurations, a definition of an application programming interface (API) and its organization into Java packages. Also disclosed are concepts leading to definition of such a framework.

Publication date: 22-01-2004

Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding

Number: CA2491741A1

A method and apparatus for variable accuracy inter-picture timing specification for digital video encoding is disclosed. Specifically, the present invention discloses a system that allows the relative timing of nearby video pictures to be encoded in a very efficient manner. In one embodiment, the display time difference between a current video picture (105) and a nearby video picture is determined. The display time difference is then encoded (180) into a digital representation of the video picture. In a preferred embodiment, the nearby video picture is the most recently transmitted stored picture. For coding efficiency, the display time difference may be encoded using a variable length coding system or arithmetic coding. In an alternate embodiment, the display time difference is encoded as a power of two to reduce the number of bits transmitted.
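The power-of-two embodiment means sending only the exponent when the display time difference is an exact power of two, which needs far fewer bits than the raw value. A minimal sketch of that check (the surrounding bitstream syntax is out of scope):

```python
def encode_time_diff_pow2(diff):
    """Return the exponent k when the display time difference equals 2**k
    (so only k need be transmitted), or None when the difference is not a
    power of two and another coding mode must be used."""
    if diff > 0 and diff & (diff - 1) == 0:
        return diff.bit_length() - 1
    return None
```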

Publication date: 17-07-2018

System, method and computer-readable medium for encoding a signal into macroblocks

Number: US10027962B2
Assignee: AT&T Intellectual Property II LP

A quantizer and dequantizer for use in a video coding system that applies non-linear, piece-wise linear scaling functions to video information signals based on the value of a variable quantization parameter. The quantizer and dequantizer apply different non-linear, piece-wise linear scaling functions to a DC luminance signal, a DC chrominance signal, and an AC chrominance signal. A code for reporting updates of the value of the quantization parameter is interpreted to require larger changes when the quantization parameter initially is large and smaller changes when the quantization parameter initially is small.
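A piece-wise linear scaling function of the quantization parameter looks like the sketch below, which follows the shape of the MPEG-4-style luminance DC scaler; the breakpoints shown are illustrative of the technique, not a reproduction of the patented tables.

```python
def dc_luma_scaler(qp):
    """Non-linear, piece-wise linear scaling of the DC luminance quantizer
    step as a function of the quantization parameter: constant for small
    QP, then linear segments with different slopes and offsets."""
    if qp <= 4:
        return 8
    if qp <= 8:
        return 2 * qp
    if qp <= 24:
        return qp + 8
    return 2 * qp - 16
```

Separate functions of the same form, with different segments, would handle the DC chrominance and AC chrominance signals.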

Publication date: 20-08-2002

Fixed or adaptive deinterleaved transform coding for image coding and intra coding of video

Number: CA2214663C

A coding strategy efficiently codes intra (macroblocks, regions, pictures, VOP) data. This strategy uses two basic approaches, a Fixed Deinterleaved Transform Coding approach and an Adaptive Deinterleaved Transform Coding approach. Furthermore, within each approach, two types of coders are developed. One coder operates on an entire picture or VOP and the other coder operates on small local regions. Using coders and decoders of the present invention, efficient coding at a range of complexities becomes possible, allowing suitable tradeoffs for a variety of applications.

Publication date: 18-12-2002

Method of transmitting layered video-coded information

Number: CA2390715A1
Assignee: AT&T Corp

Video-coded information is transmitted over a network at a priority level that is determined based on feedback from the network. In an embodiment, the feedback comprises a response to a request for information on whether the network currently has the available capacity to transmit additional high priority traffic. In an embodiment, a candidate base layer frame is transmitted over the network as a base layer frame if permission to send high priority data was granted and is transmitted over the network as an enhancement layer frame if permission to send high priority data was denied. In a further embodiment, the candidate base layer frame is deleted if permission to send high priority data was denied.
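The per-frame decision in the abstract is a small state machine: a candidate base layer frame is sent as base layer only when the network grants high-priority capacity, otherwise it is demoted to the enhancement layer (or dropped, in the further embodiment). A minimal sketch with illustrative names:

```python
def classify_frame(is_candidate_base, permission_granted, drop_on_deny=False):
    """Return the layer a frame is transmitted on: 'base' when the network
    grants high-priority capacity to a candidate base layer frame,
    'enhancement' otherwise, or None when the further embodiment deletes
    the candidate on denial."""
    if not is_candidate_base:
        return 'enhancement'
    if permission_granted:
        return 'base'
    return None if drop_on_deny else 'enhancement'
```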
