Total found: 332. Displayed: 163.

02-04-2015 publication date

REMOVING CACHED DATA

Number: US20150095587A1
Assignee:

Embodiments of the present invention provide a method and apparatus for removing cached data. The method comprises determining activeness of a plurality of divided lists; ranking the plurality of divided lists according to the determined activeness of the plurality of divided lists. The method comprises removing a predetermined amount of cached data from the plurality of divided lists according to the ranking result when the used capacity in the cache area reaches a predetermined threshold. Through embodiments of the present invention, the activeness of each divided list may be used to wholly measure the heat of access to the cached data included by each divided list, and upon removal, the cached data with lower heat of access in the whole system can be removed and the cached data with higher heat of access in the whole system can be retained so as to improve the read/write rate of the system. 1. A method for removing cached data , comprising:determining an activeness associated with a plurality of divided lists;ranking the plurality of divided lists according to the activeness of the plurality of divided lists; andremoving a predetermined amount of cached data from the plurality of divided lists in accordance with the ranking of the plurality of divided lists when the used capacity in the cache area reaches a predetermined threshold.2. The method according to claim 1 , wherein determining an activeness associated with the plurality of divided lists comprises:determining a recently overall accessed timestamp for each of the plurality of divided lists, wherein the recently overall accessed timestamp indicates a time when all cached data included in each of the plurality of divided lists are accessed most recently; andwherein ranking the plurality of divided lists according to the activeness of the plurality of divided lists comprises:ranking the plurality of divided lists according to the recently overall accessed timestamps.3. The method according to claim 2 , ...
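
The eviction policy described above can be summarized in a short sketch. This is a minimal Python illustration, assuming a divided list is simply a mapping from keys to (size, last-access) pairs and that a list's activeness is approximated by its "recently overall accessed timestamp" (the oldest last-access time among its entries); class and method names are illustrative, not from the patent.

```python
import time

class DividedListCache:
    """Sketch: a cache split into divided lists, evicted by list activeness."""

    def __init__(self, capacity_bytes, threshold=0.9, num_lists=4):
        self.capacity = capacity_bytes
        self.threshold = threshold
        # each divided list maps key -> (size, last_access_time)
        self.lists = [dict() for _ in range(num_lists)]
        self.used = 0

    def _list_for(self, key):
        return self.lists[hash(key) % len(self.lists)]

    def put(self, key, size):
        self._list_for(key)[key] = (size, time.monotonic())
        self.used += size
        if self.used >= self.threshold * self.capacity:
            self._evict(self.used - int(0.7 * self.capacity))

    def touch(self, key):
        lst = self._list_for(key)
        if key in lst:
            size, _ = lst[key]
            lst[key] = (size, time.monotonic())

    def _activeness(self, lst):
        # "recently overall accessed timestamp": the time by which *all*
        # entries in the list have been accessed, i.e. the oldest access.
        return min((t for _, t in lst.values()), default=0.0)

    def _evict(self, bytes_to_free):
        # rank divided lists, least active first, and drain them in order
        for lst in sorted(self.lists, key=self._activeness):
            for key in sorted(lst, key=lambda k: lst[k][1]):
                if bytes_to_free <= 0:
                    return
                size, _ = lst.pop(key)
                self.used -= size
                bytes_to_free -= size
```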

31-12-2019 publication date

Frame packing and unpacking higher-resolution chroma sampling formats

Number: US0010523953B2

Video frames of a higher-resolution chroma sampling format such as YUV 4:4:4 are packed into video frames of a lower-resolution chroma sampling format such as YUV 4:2:0 for purposes of video encoding. For example, sample values for a frame in YUV 4:4:4 format are packed into two frames in YUV 4:2:0 format. After decoding, the video frames of the lower-resolution chroma sampling format can be unpacked to reconstruct the video frames of the higher-resolution chroma sampling format. In this way, available encoders and decoders operating at the lower-resolution chroma sampling format can be used, while still retaining higher resolution chroma information. In example implementations, frames in YUV 4:4:4 format are packed into frames in YUV 4:2:0 format such that geometric correspondence is maintained between Y, U and V components for the frames in YUV 4:2:0 format.
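
As a rough illustration of why one YUV 4:4:4 frame fits exactly into two YUV 4:2:0 frames (3·H·W samples versus 2 × 1.5·H·W), here is a NumPy sketch of one possible invertible packing. The arrangement that actually preserves geometric correspondence is defined by the patent; the layout below is only an assumption.

```python
import numpy as np

def pack_444_to_two_420(y, u, v):
    """Sketch: pack one YUV 4:4:4 frame (three HxW planes, H and W even)
    into two YUV 4:2:0 frames.  Sample counts match exactly; the particular
    arrangement here is illustrative, not the one in the patent."""
    h, w = y.shape
    # Frame A: the original luma plane plus one quarter of each chroma plane.
    frame_a = (y, u[0::2, 0::2], v[0::2, 0::2])
    # Frame B: the remaining chroma samples, rearranged into a luma-sized
    # plane plus two quarter-size planes so that it is a valid 4:2:0 frame.
    luma_b = np.empty((h, w), dtype=y.dtype)
    luma_b[:, 0::2] = u[:, 1::2]          # odd columns of U
    luma_b[:, 1::2] = v[:, 1::2]          # odd columns of V
    frame_b = (luma_b, u[1::2, 0::2], v[1::2, 0::2])
    return frame_a, frame_b
```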

14-11-2017 publication date

Hardware-accelerated decoding of scalable video bitstreams

Number: US0009819949B2

In various respects, hardware-accelerated decoding is adapted for decoding of video that has been encoded using scalable video coding. For example, for a given picture to be decoded, a host decoder determines whether a corresponding base picture will be stored for use as a reference picture. If so, the host decoder directs decoding with an accelerator such that the some of the same decoding operations can be used for the given picture and the reference base picture. Or, as another example, the host decoder groups encoded data associated with a given layer representation in buffers. The host decoder provides the encoded data for the layer to the accelerator. The host decoder repeats the process layer-after-layer in the order that layers appear in the bitstream, according to a defined call pattern for an acceleration interface, which helps the accelerator determine the layers with which buffers are associated.

31-03-2016 publication date

COUPLING SAMPLE METADATA WITH MEDIA SAMPLES

Number: US20160094847A1
Assignee: Microsoft Corporation

Innovations in the area of sample metadata processing can help a media playback tool avoid loss of synchronization between sample metadata and media samples. For example, a media playback tool identifies encoded data and sample metadata for a current media sample, then couples the sample metadata with the current media sample. The media playback tool provides the sample metadata and encoded data for the current media sample to a media decoder, which maintains the coupling between at least one element of the sample metadata and the current media sample during at least one stage of decoding, even when the current media sample is dropped, delayed, split, or repeated. For example, the media playback tool can determine whether to drop the current media sample and, if the current media sample is dropped, also drop the sample metadata that is coupled with the current media sample. 1. One or more computer-readable media storing computer-executable instructions for causing a computing system programmed thereby to perform:identifying, in a media elementary bit stream, encoded data for a current media sample;identifying, from outside the media elementary bit stream, sample metadata for the current media sample;coupling the sample metadata for the current media sample with the current media sample; andconcurrently providing the sample metadata for the current media sample and the encoded data for the current media sample to a media decoder.2. The one or more computer-readable media of claim 1 , further storing computer-executable instructions for causing the computing system to perform:decoding the encoded data for the current media sample to produce a reconstructed version of the current media sample; andprocessing the reconstructed version of the current media sample for output, wherein at least one sample metadata element of the sample metadata for the current media sample remains coupled with the current media sample during the decoding.3. The one or more computer-readable ...
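
A minimal sketch of the coupling idea, assuming the sample metadata arrives as a plain dictionary from the container parser; the CoupledSample type and the maybe_drop helper are hypothetical names, not part of any real playback API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoupledSample:
    """Sketch: keep a media sample and its out-of-band metadata in one object,
    so whatever happens to the sample (drop, delay, split, repeat) also happens
    to its metadata.  Field names are illustrative."""
    encoded_data: bytes
    metadata: dict      # e.g. timestamps, rotation, HDR info from the container

def maybe_drop(sample: CoupledSample, behind_schedule: bool) -> Optional[CoupledSample]:
    """If the playback tool decides to drop the sample, the coupled metadata is
    dropped with it, so later samples never inherit the wrong metadata."""
    return None if behind_schedule else sample
```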

09-04-2019 publication date

Generic platform video image stabilization

Number: US0010257421B2

Video image stabilization provides better performance on a generic platform for computing devices by evaluating available multimedia digital signal processing components, and selecting the available components to utilize according to a hierarchy structure for video stabilization performance for processing parts of the video stabilization. The video stabilization has improved motion vector estimation that employs refinement motion vector searching according to a pyramid block structure relationship starting from a downsampled resolution version of the video frames. The video stabilization also improves global motion transform estimation by performing a random sample consensus approach for processing the local motion vectors, and selection criteria for motion vector reliability. The video stabilization achieves the removal of hand shakiness smoothly by real-time one-pass or off-line two-pass temporal smoothing with error detection and correction.

21-03-2024 publication date

GENERATING BOUNDARY POINTS FOR MEDIA CONTENT

Number: US20240098346A1
Assignee:

Systems and methods described herein provide for novel boundary generation features for interleaving additional content into media content. Media content may be received which includes a video and audio portion. An unencrypted encode of the video portion may be generated. A first set of time stamps for the video portion may be generated using a computer vision algorithm. A second set of time stamps for the video portion may be generated for identifying IDR frames using a first algorithm. A third set of time stamps may be generated to serve as boundaries for interleaving additional content into the media content based on a priority algorithm that uses the first set of time stamps and the second set of time stamps. The video portion may be encoded using the third set of time stamps to determine the IDR frames for the media content.

27-12-2018 publication date

PARALLEL COMPUTE OFFLOAD TO DATABASE ACCELERATOR

Number: US20180373760A1
Assignee: Xilinx, Inc.

Embodiments herein describe techniques for preparing and executing tasks related to a database query in a database accelerator. In one embodiment, the database accelerator is separate from a host CPU. A database management system (DBMS) can offload tasks corresponding to a database query to the database accelerator. The DBMS can request data from the database relevant to the query and then convert that data into one or more data blocks that are suitable for processing by the database accelerator. In one embodiment, the database accelerator contains individual hardware processing units (PUs) that can process data in parallel or concurrently. In order to process the data concurrently, the data block includes individual PU data blocks that are each intended for a respective PU in the database accelerator.

29-08-2017 publication date

Encoding/decoding of high chroma resolution details

Number: US0009749646B2

Innovations in encoding and decoding of video pictures in a high-resolution chroma sampling format (such as YUV 4:4:4) using a video encoder and decoder operating on coded pictures in a low-resolution chroma sampling format (such as YUV 4:2:0) are presented. For example, high chroma resolution details are selectively encoded on a region-by-region basis. Or, as another example, coded pictures that contain sample values for low chroma resolution versions of input pictures and coded pictures that contain sample values for high chroma resolution details of the input pictures are encoded as separate sub-sequences of a single sequence of coded pictures, which can facilitate effective motion compensation. In this way, available encoders and decoders operating on coded pictures in the low-resolution chroma sampling format can be effectively used to provide high chroma resolution details.

19-11-2019 publication date

Rendition switch indicator

Number: US0010484701B1

Methods to switch between renditions of a video stream are generally described. In some examples, the methods may include encoding a video stream at a first image quality in a first rendition and a second, lower image quality in a second rendition. The methods may further include sending the first rendition to a recipient computing device. The methods may include receiving a request to switch from the first rendition to the second rendition. The methods may include determining that first indicator data of a first inter-coded frame indicates that the video stream can be switched to a lower image quality rendition at the first inter-coded frame. In some examples, the methods may further include sending the second rendition to the recipient computing device.

26-11-2019 publication date

Category-prefixed data batching of coded media data in multiple categories

Number: US0010489426B2

Innovations for category-prefixed data batching (“CPDB”) of entropy-coded data or other payload data for coded media data, as well as innovations for corresponding recovery of the entropy-coded data (or other payload data) formatted with CPDB. The CPDB can be used in conjunction with coding/decoding for video content, image content, audio content or another type of content. For example, after receiving coded media data in multiple categories from encoding units, a formatting tool formats payload data with CPDB, generating a batch prefix for a batch of the CPDB-formatted payload data. The batch prefix includes a category identifier and a data quantity indicator. The formatting tool outputs the CPDB-formatted payload data to a bitstream. At the decoder side, a formatting tool receives the CPDB-formatted payload data in a bitstream, recovers the payload data from the CPDB-formatted payload data, and outputs the payload data (e.g., to decoding units).
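
A small Python sketch of the batch-prefix framing described above. The abstract only says the prefix carries a category identifier and a data quantity indicator; the concrete byte layout below (one byte of category, four little-endian bytes of length) is an assumption for illustration.

```python
import struct

# Assumed prefix layout (not specified in the abstract): 1-byte category id
# followed by a 4-byte little-endian byte count for the batch.
_PREFIX = struct.Struct("<BI")

def write_batches(stream, batches):
    """batches: iterable of (category_id, payload_bytes) pairs."""
    for category, payload in batches:
        stream.write(_PREFIX.pack(category, len(payload)))
        stream.write(payload)

def read_batches(stream):
    """Recover (category_id, payload_bytes) pairs from a CPDB-formatted stream."""
    while True:
        header = stream.read(_PREFIX.size)
        if len(header) < _PREFIX.size:
            return
        category, size = _PREFIX.unpack(header)
        yield category, stream.read(size)
```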

20-03-2012 publication date

Pop-up drain stopper linkage assembly

Number: US0008136179B2

A pop-up drain stopper linkage assembly includes a lift rod, a connecting bar, a pivot rod and a drain stopper. The bottom end of the lift rod forms an engagement part. The upper end of the connecting bar has an engagement groove for the engagement part of the lift rod being engaged and fixed, and the connecting bar has a plurality of holes spaced apart a distance away from the engagement groove. The second end of the pivot rod connects to the drain stopper and a section adjacent the first end has a plurality of fixed portions for tying in with the hole of the connecting bar, and each of the fixed portions and any one of the holes are capable of being passing through and positioning with each other. Thereby, the pop-up drain stopper linkage assembly can be quickly and conveniently assembled with reliable linking effect.

19-01-2021 publication date

Techniques for annotating media content

Number: US0010897658B1
Assignee: Amazon Technologies, Inc., AMAZON TECH INC

Methods and apparatus are described for automating aspects of the annotation of a media presentation. Events are identified that relate to entities associated with the scenes of the media presentation. These events are time coded relative to the media timeline of the media presentation and might represent, for example, the appearance of a particular cast member or playback of a particular music track. The video frames of the media presentation are processed to identify visually similar intervals that may serve as or be used to identify contexts (e.g., scenes) within the media presentation. Relationships between the event data and the visually similar intervals or contexts are used to identify portions of the media presentation during which the occurrence of additional meaningful events is unlikely. This information may be surfaced to a human operator tasked with annotating the content as an indication that part of the media presentation may be skipped.

02-03-2017 publication date

ACCELERATION INTERFACE FOR VIDEO DECODING

Number: US20170064313A1
Assignee:

A host decoder and accelerator communicate across an acceleration interface. The host decoder receives at least part of a bitstream for video, and it manages certain decoding operations of the accelerator across the acceleration interface. The accelerator receives data from the host decoder across the acceleration interface, then performs decoding operations. For a given frame, settings based on an uncompressed frame header can be transferred in a different buffer of the acceleration interface than a compressed frame header and compressed frame data. Among other features, the host decoder can assign settings used by the accelerator that override values of bitstream syntax elements, can assign surface index values used by the accelerator to update reference frame buffers, and can handle skipped frames without invoking the accelerator. Among other features, the accelerator can use surface index values to update reference frame buffers, and can handle changes in spatial resolution at non-key frames. 1. In a computer system that includes a host decoder and an accelerator in communication with the host decoder across an acceleration interface , a method comprising:at the host decoder, receiving at least part of a bitstream of encoded data for video; and parsing, from the at least part of the bitstream, an uncompressed frame header for a current frame of the video;', 'transferring, to the accelerator across the acceleration interface, data based at least in part on the uncompressed frame header in a first buffer; and', 'transferring, to the accelerator across the acceleration interface, a compressed frame header for the current frame in a second buffer different than the first buffer., 'with the host decoder, managing at least some video decoding operations of the accelerator across the acceleration interface, including2. The method of wherein the data based at least in part on the uncompressed frame header includes one or more of:settings for decoding tools that apply ...

18-11-2021 publication date

SUPPLEMENTAL ENHANCEMENT INFORMATION INCLUDING CONFIDENCE LEVEL AND MIXED CONTENT INFORMATION

Number: US20210360264A1
Assignee: Microsoft Technology Licensing, LLC

This application relates to video encoding and decoding, and specifically to tools and techniques for using and providing supplemental enhancement information in bitstreams. Among other things, the detailed description presents innovations for bitstreams having supplemental enhancement information (SEI). In particular embodiments, the SEI message includes picture source data (e.g., data indicating whether the associated picture is a progressive scan picture or an interlaced scan picture and/or data indicating whether the associated picture is a duplicate picture). The SEI message can also express a confidence level of the encoder's relative confidence in the accuracy of this picture source data. A decoder can use the confidence level indication to determine whether the decoder should separately identify the picture as progressive or interlaced and/or a duplicate picture or honor the picture source scanning information in the SEI as it is. 120.-. (canceled)21. A method performed by an encoder device , the method comprising:determining a first flag and a second flag for identifying source scan type of pictures in a sequence, the first flag and the second flag collectively and exclusively indicating one of the following unique states for the sequence of pictures: a state indicating that the source scan type of the pictures in the sequence is interlaced, a state indicating that the source scan type of the pictures in the sequence is progressive, a state indicating that the source scan type of the pictures in the sequence is unknown, and a state indicating that the source scan type is independently indicated for each picture of the pictures in the sequence by a value of a picture-level syntax element that is to be signaled as part of an SEI message or to be inferred, wherein the first flag indicates whether the source scan type of the pictures is interlaced and the second flag is different from the first flag, is a separate syntax element from the first flag, and ...
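
The four exclusive states described in the claim can be captured with a simple mapping. The sketch below assumes one flag per axis (interlaced and progressive) and that setting both flags means the scan type is signaled per picture; the exact flag-to-state assignment is an assumption, not taken from the patent text.

```python
from enum import Enum

class SourceScanType(Enum):
    INTERLACED = "interlaced"
    PROGRESSIVE = "progressive"
    UNKNOWN = "unknown"
    PER_PICTURE = "signaled per picture"   # via a picture-level syntax element

def sequence_scan_type(interlaced_flag: bool, progressive_flag: bool) -> SourceScanType:
    """Sketch: map two sequence-level flags onto the four exclusive states
    described in the claim.  The flag-to-state mapping is assumed."""
    if interlaced_flag and not progressive_flag:
        return SourceScanType.INTERLACED
    if progressive_flag and not interlaced_flag:
        return SourceScanType.PROGRESSIVE
    if not interlaced_flag and not progressive_flag:
        return SourceScanType.UNKNOWN
    return SourceScanType.PER_PICTURE
```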

06-08-2015 publication date

Data Unit Identification for Compressed Video Streams

Number: US20150222917A1
Assignee: Microsoft Corporation

Data unit identification for compressed video streams is described. In one or more implementations, a compressed video stream is received at a computing device and a determination is made as to whether prior knowledge is available that relates to the compressed video stream. Responsive to the determination that prior knowledge is available that relates to the compressed video stream, the prior knowledge is employed by the computing device to perform data unit identification for the compressed video stream. In one or more implementations, SIMD instructions are utilized to perform pattern (0x00 00) search in a batch mode. Then a byte-by-byte search is performed to confirm whether the pattern, 0x00 00, found is part of a start code, 0x00 00 01, or not. 1. A method comprising:receiving a compressed video stream at a computing device;determining whether prior knowledge is available that relates to the compressed video stream; andresponsive to the determination that there prior knowledge is available that relates to the compressed video stream, employing the prior knowledge by the computing device to perform data unit identification for the compressed video stream.2. A method as described in claim 1 , wherein the employing of the prior knowledge causes a byte-by-byte search to perform the data unit identification to be skipped for at least a part of the compressed video stream.3. A method as described in claim 1 , wherein the employing is performed such that a byte-by-byte search is performed to perform the data unit identification until a frame is identified from a data unit configured as a frame data unit after which a remaining portion of the frame is skipped and the byte-by-byte search begins after the remaining portion for a subsequent frame.4. A method as described in claim 1 , wherein the prior knowledge is based on identification of an encoding format of the compressed video stream or a source of the compressed video stream.5. A method as described in claim 1 , ...
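
The two-stage search mentioned in the abstract (a batched search for the 0x00 0x00 pattern, then a byte-by-byte check for the full 0x00 0x00 0x01 start code) can be sketched as follows; bytes.find stands in for the SIMD batch search.

```python
def find_start_codes(data: bytes):
    """Sketch: locate Annex-B style start codes.  bytes.find stands in for the
    batched SIMD search for the 0x00 0x00 pattern; the byte-by-byte check then
    confirms whether the match is part of a 0x00 0x00 0x01 start code."""
    offsets = []
    pos = data.find(b"\x00\x00")
    while pos != -1:
        if pos + 2 < len(data) and data[pos + 2] == 0x01:
            offsets.append(pos)                       # start code begins here
            pos = data.find(b"\x00\x00", pos + 3)
        else:
            pos = data.find(b"\x00\x00", pos + 1)
    return offsets

# Example: two data units separated by start codes.
print(find_start_codes(b"\x00\x00\x01\x67\x42\x00\x00\x01\x68"))   # -> [0, 5]
```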

11-11-2014 publication date

Low-latency video decoding

Number: US0008885729B2

Techniques and tools for reducing latency in video decoding for real-time communication applications that emphasize low delay. For example, a tool such as a video decoder selects a low-latency decoding mode. Based on the selected decoding mode, the tool adjusts output timing determination, picture boundary detection, number of pictures in flight and/or jitter buffer utilization. For low-latency decoding, the tool can use a frame count syntax element to set initial output delay for a decoded picture buffer, and the tool can use auxiliary delimiter syntax elements to detect picture boundaries. To further reduce delay in low-latency decoding, the tool can reduce number of pictures in flight for multi-threaded decoding and reduce or remove jitter buffers. The tool receives encoded data, performs decoding according to the selected decoding mode to reconstruct pictures, and outputs the pictures for display.

04-05-2017 publication date

TRANSFORMING VIDEO BIT STREAMS FOR PARALLEL PROCESSING

Number: US20170127072A1
Assignee:

Aspects extend to methods, systems, and computer program products for transforming video bit streams for parallel decoding. Aspects of the invention can be used to break segment coding structure limitations in video bit streams. Aspects can be used to maximize parallelization of video decoding tasks, including motion compensation processing, to more efficiently utilize multi-core and multi-processor computer systems. Multiple portions of intra-segment data can be processed in parallel to speed up single frame processing. Video communication latency and memory requirements are also reduced. 1. A system , the system comprising:a processor;system memory; receive a frame from a video bit stream, the frame partitioned into one or more segments;', decode a first data portion and a second data portion from the segment, the first data portion having first parameters defining how to decode and visually present the first data portion and the second data portion having second parameters defining how to decode and visually present the second data portion; and', 'determine that values for the second parameters are dependent on values for the first parameters;, 'for at least one segment from among the one or more segments, calculate new values for the second parameters based on the values for the second parameters and the values for the first parameters; and', 'reconstruct the frame, including using the new values for the second parameters to define how to visually present the second data portion such that the second data portion can be processed in parallel with the first data portion., 'form a reconstructed frame, the reconstructed frame breaking the dependency of the values for the second parameters on the values for the first parameters including], 'a decoder, using the processor, configured to2. The system of claim 1 , wherein a decoder claim 1 , using the processor claim 1 , being configured to reconstruct the frame comprises a decoder claim 1 , using the processor claim 1 ...

06-12-2016 publication date

Single pass/single copy network abstraction layer unit parser

Number: US0009516147B2

Technologies for a single-pass/single copy network abstraction layer unit (“NALU”) parser. Such a NALU parser typically reuses source and/or destination buffers, optionally changes endianess of NALU data, optionally processes emulation prevention codes, and optionally processes parameters in slice NALUs, all as part of a single pass/single copy process. The disclosed NALU parser technologies are further suitable for hardware implementation, software implementation, or any combination of the two.
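
One of the optional steps listed above is processing emulation prevention codes. In H.264/HEVC Annex B streams an encoder inserts a 0x03 byte after every 0x00 0x00 pair inside a NAL unit so that payload bytes never imitate a start code; a single-pass parser strips these bytes while copying. A minimal sketch:

```python
def strip_emulation_prevention(nalu: bytes) -> bytes:
    """Remove emulation-prevention bytes: the 0x03 inserted after every
    0x00 0x00 pair so that payload bytes never mimic a start code."""
    out = bytearray()
    zeros = 0
    for b in nalu:
        if zeros >= 2 and b == 0x03:
            zeros = 0                    # drop the emulation-prevention byte
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)

print(strip_emulation_prevention(b"\x00\x00\x03\x01\xab").hex())   # -> 000001ab
```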

04-04-2024 publication date

SEAMLESS INSERTION OF MODIFIED MEDIA CONTENT

Number: US20240112703A1
Assignee:

Disclosed are various embodiments for seamless insertion of modified media content. In one embodiment, a modified portion of video content is received. The modified portion has a start cue point and an end cue point that are set relative to a modification to the video content to indicate respectively when the modification approximately begins and ends compared to the video content. A video coding associated with the video content is identified. The start cue point and/or the end cue point are dynamically adjusted to align the modified portion with the video content based at least in part on the video coding.

11-06-2020 publication date

JUST AFTER BROADCAST MEDIA CONTENT

Number: US20200186848A1
Assignee:

Methods and apparatus are described for making broadcast content available as an on-demand asset soon after all of the video fragments of the broadcast content have been made available. As the video fragments of the broadcast content are made available, they are requested and archived. When all of the fragments for the duration of the broadcast content are available (e.g., a live event ends), a VOD-style manifest is generated and the archived fragments are made available for downloading or streaming using the VOD-style manifest. 1. A computer-implemented method , comprising:generating media content representing a live event;successively requesting portions of first manifest information representing the media content from a packager associated with the media content, the first manifest information being configured to support streaming of the media content while the live event is ongoing;providing the first manifest information to a first client device;streaming the media content to the first client device;requesting video fragments of the media content using the first manifest information while the live event is ongoing;storing the video fragments;determining that the live event has completed;requesting a final portion of the first manifest information from the packager, the final portion of the first manifest information representing a full playback duration of the media content;generating second manifest information from the final portion of the first manifest information, the second manifest information being static and specifying a start time and an end time of the media content, the second manifest information being configured to support on-demand access to copies of the video fragments;providing the second manifest information to a second client device; andproviding the copies of the video fragments to the second client device.2. The method of claim 1 , wherein the video fragments correspond to a plurality of streaming protocols including HTTP Live Streaming ( ...

03-04-2014 publication date

FRAME PACKING AND UNPACKING HIGHER-RESOLUTION CHROMA SAMPLING FORMATS

Number: US20140092998A1
Assignee: MICROSOFT CORPORATION

Video frames of a higher-resolution chroma sampling format such as YUV 4:4:4 are packed into video frames of a lower-resolution chroma sampling format such as YUV 4:2:0 for purposes of video encoding. For example, sample values for a frame in YUV 4:4:4 format are packed into two frames in YUV 4:2:0 format. After decoding, the video frames of the lower-resolution chroma sampling format can be unpacked to reconstruct the video frames of the higher-resolution chroma sampling format. In this way, available encoders and decoders operating at the lower-resolution chroma sampling format can be used, while still retaining higher resolution chroma information. In example implementations, frames in YUV 4:4:4 format are packed into frames in YUV 4:2:0 format such that geometric correspondence is maintained between Y, U and V components for the frames in YUV 4:2:0 format.

24-01-2017 publication date

Neighbor determination in video decoding

Number: US0009554134B2
Assignee: Microsoft Technology Licensing, LLC

Video decoding innovations for multithreading implementations and graphics processor unit (“GPU”) implementations are described. For example, for multithreaded decoding, a decoder uses innovations in the areas of layered data structures, picture extent discovery, a picture command queue, and/or task scheduling for multithreading. Or, for a GPU implementation, a decoder uses innovations in the areas of inverse transforms, inverse quantization, fractional interpolation, intra prediction using waves, loop filtering using waves, memory usage and/or performance-adaptive loop filtering. Innovations are also described in the areas of error handling and recovery, determination of neighbor availability for operations such as context modeling and intra prediction, CABAC decoding, computation of collocated information for direct mode macroblocks in B slices, reduction of memory consumption, implementation of trick play modes, and picture dropping for quality adjustment.

01-01-2009 publication date

Innovations in video decoder implementations

Number: US20090003447A1
Assignee: Microsoft Corporation

Video decoding innovations for multithreading implementations and graphics processor unit (“GPU”) implementations are described. For example, for multithreaded decoding, a decoder uses innovations in the areas of layered data structures, picture extent discovery, a picture command queue, and/or task scheduling for multithreading. Or, for a GPU implementation, a decoder uses innovations in the areas of inverse transforms, inverse quantization, fractional interpolation, intra prediction using waves, loop filtering using waves, memory usage and/or performance-adaptive loop filtering. Innovations are also described in the areas of error handling and recovery, determination of neighbor availability for operations such as context modeling and intra prediction, CABAC decoding, computation of collocated information for direct mode macroblocks in B slices, reduction of memory consumption, implementation of trick play modes, and picture dropping for quality adjustment.

15-11-2018 publication date

CATEGORY-PREFIXED DATA BATCHING OF CODED MEDIA DATA IN MULTIPLE CATEGORIES

Number: US20180329978A1
Assignee: Microsoft Technology Licensing, LLC

Innovations for category-prefixed data batching (“CPDB”) of entropy-coded data or other payload data for coded media data, as well as innovations for corresponding recovery of the entropy-coded data (or other payload data) formatted with CPDB. The CPDB can be used in conjunction with coding/decoding for video content, image content, audio content or another type of content. For example, after receiving coded media data in multiple categories from encoding units, a formatting tool formats payload data with CPDB, generating a batch prefix for a batch of the CPDB-formatted payload data. The batch prefix includes a category identifier and a data quantity indicator. The formatting tool outputs the CPDB-formatted payload data to a bitstream. At the decoder side, a formatting tool receives the CPDB-formatted payload data in a bitstream, recovers the payload data from the CPDB-formatted payload data, and outputs the payload data (e.g., to decoding units).

10-01-2023 publication date

Just after broadcast media content

Number: US0011553218B2
Assignee: Amazon Technologies, Inc.

Methods and apparatus are described for making broadcast content available as an on-demand asset soon after all of the video fragments of the broadcast content have been made available. As the video fragments of the broadcast content are made available, they are requested and archived. When all of the fragments for the duration of the broadcast content are available (e.g., a live event ends), a VOD-style manifest is generated and the archived fragments are made available for downloading or streaming using the VOD-style manifest.

09-10-2014 publication date

SYNTAX-AWARE MANIPULATION OF MEDIA FILES IN A CONTAINER FORMAT

Number: US20140304303A1
Assignee:

A container format processing tool performs syntax-aware manipulation of hierarchically organized syntax elements defined according to a container format in a media file. For example, a container format verifier checks conformance of a media file to a container format, which can help ensure interoperability between diverse sources of media content and playback equipment. Conformance verification can include verification of individual syntax elements, cross-verification, verification that any mandatory syntax elements are present and/or verification of synchronization. Or, a container format “fuzzer” simulates corruption of a media file, which can help test the resilience of playback equipment to errors in the media files. The container format fuzzer can simulate random bit flipping errors, an audio recording failure or incorrect termination of recording. Or, a container format editor can otherwise edit the media file in the container format. 1. One or more computer-readable media storing computer-executable instructions for causing a computing system programmed thereby to perform a method comprising:receiving a media file in a container format for a presentation that includes one or more of audio content, image content and video content, wherein the container format is tree-structured such that the media file includes hierarchically organized syntax elements defined according to the container format; andperforming syntax-aware manipulation of at least some of the hierarchically organized syntax elements defined according to the container format in the media file.2. The one or more computer-readable media of wherein the performing syntax-aware manipulation includes verifying conformance of the media file to the container format.3. The one or more computer-readable media of wherein the verifying includes single-element verification for the at least some of the hierarchically organized syntax elements defined according to the container format claim 2 , including claim ...

04-05-2017 publication date

VIDEO BIT STREAM DECODING

Number: US20170127074A1
Assignee:

Aspects extend to methods, systems, and computer program products for video bit stream decoding. Aspects include flexible definition and detection of surface alignment requirements for decoding hardware. Surface alignment requirements can be handled by render cropping (e.g., cropping at a video output device), through adjustment and modification of original syntax values in a video bit stream and relaxed media type negotiation in a software (host) decoder. Resolution changes can be hidden with the aligned surface allocation when applicable. Performance can be improved and power consumption reduced by using hidden resolution changes. 1. A system , the system comprising:decode hardware for decoding video bit streams, the decode hardware having internal surface alignment requirements;a display device for visually presenting video data, the display device capable of cropping padding from video data; and querying the decode hardware for the internal surface alignment requirements;', 'determining a code block from a video bit stream is to be input to the decode hardware, the code block having an actual resolution that does not match the surface alignment requirements; and', 'modifying syntax values for the code block, the modified syntax values indicating that the code block has a purported resolution matching the surface alignment requirements, the purported resolution being larger than the actual resolution., 'a software decoder, using a processor, for2. The system of claim 1 , further comprising the decode hardware:decoding the code block into video data at the purported resolution, the video data aligned in accordance with the surface alignment requirements, the video data at the purported resolution including video data at the actual resolution and padding.3. The system of claim 2 , further comprising the software decoder claim 2 , using the processor claim 2 , for allocating an output buffer in accordance with the surface alignment requirements.4. The system of ...
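
The padding and cropping arithmetic implied by the abstract is straightforward; a sketch follows, assuming the hardware exposes its alignment as a width and height granularity, with illustrative names.

```python
def align_up(value: int, alignment: int) -> int:
    """Round value up to the next multiple of alignment."""
    return ((value + alignment - 1) // alignment) * alignment

def purported_resolution(actual_w, actual_h, align_w, align_h):
    """Sketch: compute the padded ("purported") resolution handed to decode
    hardware whose surfaces must be align_w x align_h aligned, plus the crop
    rectangle the display side should apply afterwards."""
    padded = (align_up(actual_w, align_w), align_up(actual_h, align_h))
    crop = (0, 0, actual_w, actual_h)          # left, top, right, bottom
    return padded, crop

print(purported_resolution(1920, 1080, 64, 32))   # -> ((1920, 1088), (0, 0, 1920, 1080))
```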

07-03-2017 publication date

Lossy data stream decoder

Number: US0009590952B2

Lossy data stream decoder techniques are described herein. In response to a request for decoded content from a consuming application, a decoder may validate headers and identify portions of the data that are considered pertinent to the request. The decoder then performs lossy extraction to form incomplete data that is provided to the consuming application in response to the request. The full data for the data stream is not exposed to the consuming application or other downstream components. In this way, the consuming application is provided data sufficient to perform requested graphics processing and resource management operations, while at the same time the risk of piracy is mitigated since the consuming application is unable to get a full version of the data in the clear and the data have been validated by the decoder.

08-12-2016 publication date

RATE CONTROLLER FOR REAL-TIME ENCODING AND TRANSMISSION

Number: US20160360206A1
Assignee:

In response to a scene change being detected in screen content, a rate controller instructs a video encoder to generate an intraframe compressed image. The rate controller computes a target size for compressed image data using a function based on a maximum compressed size for a single image, i.e., without buffers for additional image data. For a number of images processed after detection of the scene change, this target size is computed and used to control the video encoder. After this number of images is processed, the rate controller can resume to a prior mode of operation. Such rate control reduces latency in encoding and transmission of screen content, which improves user perception of responsiveness of a host computer, such as for interactive video applications.
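
A heavily simplified sketch of such a target-size function, assuming the ceiling for a single image is the bit budget of one frame interval (bit rate divided by frame rate) and that the target decays back to a steady-state fraction over a fixed number of frames; the ramp length and fractions are illustrative, not the patent's.

```python
def frame_target_bits(bit_rate_bps, frame_rate, frames_since_scene_change,
                      ramp_frames=8, steady_fraction=0.5):
    """Sketch: target compressed size (bits) for a frame.  The ceiling is the
    bit budget of a single frame interval, i.e. no buffering of additional
    image data.  For the first `ramp_frames` frames after a scene change the
    target stays near that ceiling (the intraframe and its immediate
    successors are large), then drops to a steady-state fraction of it."""
    max_single_image = bit_rate_bps / frame_rate
    if frames_since_scene_change < ramp_frames:
        weight = 1.0 - frames_since_scene_change / ramp_frames
        return max_single_image * (steady_fraction + (1 - steady_fraction) * weight)
    return max_single_image * steady_fraction

# Example: 4 Mbit/s at 30 fps gives a ceiling of about 133 kbit per frame.
print(round(frame_target_bits(4_000_000, 30, 0)))    # intraframe, full ceiling
print(round(frame_target_bits(4_000_000, 30, 20)))   # steady state
```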

17-11-2020 publication date

Noisy media content encoding

Number: US0010841620B1
Assignee: Amazon Technologies, Inc., AMAZON TECH INC

Techniques are described for encoding noisy media content. A residual coefficient matrix representing differences in image content between a portion of a target image frame and portions of one or more reference frames can include noise within a high frequency band. Some of the noise can be removed by removing isolated residual coefficients. Some of the noise can be reduced by attenuating their values selectively.
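
A NumPy sketch of the two operations mentioned above, removing isolated high-frequency residual coefficients and attenuating the rest; the frequency-band boundary and attenuation factor are illustrative assumptions.

```python
import numpy as np

def denoise_residual(coeffs, high_freq_start=4, attenuation=0.75):
    """Sketch: suppress noise in a residual-coefficient block.  High-frequency
    coefficients (row + col >= high_freq_start) with no non-zero 4-neighbour
    are treated as isolated noise and removed; the remaining high-frequency
    coefficients are attenuated.  Thresholds are illustrative."""
    out = coeffs.astype(np.float64)
    nz = coeffs != 0
    padded = np.pad(nz, 1)
    neighbours = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
                  padded[1:-1, :-2] | padded[1:-1, 2:])
    rows, cols = np.indices(coeffs.shape)
    high = (rows + cols) >= high_freq_start
    out[high & nz & ~neighbours] = 0             # drop isolated coefficients
    out[high & nz & neighbours] *= attenuation   # soften the rest
    return out
```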

14-04-2005 publication date

Overlapped block motion compensation for variable size blocks in the context of MCTF scalable video coders

Number: US20050078755A1
Assignee:

A method, computer program product, and computer system for processing video frames. A current frame is divided into M blocks that include at least two differently sized blocks. M is at least 9. Each block in the current frame is classified as being a motion block or an I-BLOCK. Overlapped block motion compensation (OBMC) is performed on each block of the M blocks according to a predetermined scan order. The block on which OBMC is being performed is denoted as a self block. The OBMC is performed on the self block with respect to its neighbor blocks. The neighbor blocks consist of nearest neighbor blocks of the self block. Performing OBMC on the self block includes generating a weighting window for the self block and for each of its neighbor blocks.

24-10-2019 publication date

SERVER-SIDE INSERTION OF MEDIA FRAGMENTS

Number: US20190327504A1
Assignee:

Techniques are described for providing media presentations that include content originating from multiple sources in ways that are effectively transparent to end user devices. Manifest data provided to an end user device include a key encoded in the URL for each of the content fragments. The key encodes one or more interstitial periods of secondary content within the overall presentation of primary content. When a media server receives a content request from the end user device, the media server determines from the key encoded in the URL and the range of content requested whether the request corresponds to the primary content or the secondary content.

19-04-2012 publication date

SMOOTH REWIND MEDIA PLAYBACK

Number: US20120093489A1
Assignee: MICROSOFT CORPORATION

Systems and methods for smooth rewind playback of streamed media are provided. The media includes relatively-encoded frames and independently-encoded frames. The method includes receiving a rewind request indicating a rewind speed for rewind playback of the media, selectively dropping relatively-encoded frame(s) based on a receipt constraint and a decoding constraint to form a subset of the media, and receiving frames of the subset. The method further includes selecting, in a reverse order, a selected group of pictures (GOP) included within the subset, and decoding relatively-encoded frame(s) of the GOP in a forward sequential frame order. The method further includes caching relatively-encoded frame(s) of the GOP in the forward sequential frame order, and when caching, dropping and overwriting relatively-encoded frame(s) of the GOP selectively according to a memory constraint and/or a display constraint. The method further includes displaying relatively-encoded frame(s) of the GOP in a ...

02-02-2017 publication date

REDUCED SIZE INVERSE TRANSFORM FOR DECODING AND ENCODING

Number: US20170034530A1
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC

Innovations are provided for encoding and/or decoding video and/or image content using reduced size inverse transforms. For example, a reduced size inverse transform can be performed during encoding or decoding of video or image content using a subset of coefficients (e.g., primarily non-zero coefficients) of a given block. For example, a bounding area can be determined for a block that encompasses the non-zero coefficients of the block. Meta-data for the block can then be generated, including a shortcut code that indicates whether a reduced size inverse transform will be performed. The inverse transform can then be performed using a subset of coefficients for the block (e.g., identified by the bounding area) and the meta-data, which results in decreased utilization of computing resources. The subset of coefficients and the meta-data can be transferred to a graphics processing unit (GPU), which also results in savings in terms of data transfer. 1. A computing device comprising:a central processing unit; anda graphics processing unit; determining a bounding area for the block that represents an area of non-zero coefficients of the block;', 'generating meta-data for the block, the meta-data comprising a shortcut code indicating a reduced size inverse transform for the block; and', a subset of coefficients of the block corresponding to the bounding area for the block; and', 'the meta-data for the block;', 'wherein the graphics processing unit performs the reduced size inverse transform for the block using the subset of the coefficients of the block according to the meta-data for the block., 'transferring to the graphics processing unit], 'the computing device configured to perform operations during video or image encoding or decoding, the operations comprising, for each block of a plurality of blocks of a picture2. The computing device of wherein the bounding area is defined by x and y dimensions that divide the coefficients of the block into two groups:a first group ...
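
The bounding-area computation can be sketched in a few lines of NumPy; the 4x4 shortcut threshold in the usage example is an assumption for illustration.

```python
import numpy as np

def bounding_area(block):
    """Sketch: find the smallest top-left-anchored x/y extent that encloses all
    non-zero coefficients of a transform block, as a basis for deciding whether
    a reduced-size inverse transform (and a shortcut code) can be used."""
    ys, xs = np.nonzero(block)
    if len(xs) == 0:
        return 0, 0                      # all-zero block: nothing to transform
    return int(xs.max()) + 1, int(ys.max()) + 1

block = np.zeros((8, 8), dtype=np.int32)
block[0, 0], block[1, 2] = 37, -4        # only low-frequency coefficients
width, height = bounding_area(block)
use_reduced_transform = width <= 4 and height <= 4   # e.g. fall back to a 4x4 inverse transform
print(width, height, use_reduced_transform)          # -> 3 2 True
```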

03-03-2016 publication date

Thumbnail Generation

Number: US20160064039A1
Author: Yongjun Wu, Shyam Sadhwani
Assignee: Microsoft Technology Licensing LLC

Thumbnail generation techniques are described. In one or more implementations, at least one thumbnail is generated by a device from video received at the device. The generation of the at least one thumbnail includes decoding at least one I-picture included in the video when present that is to serve as a basis for the at least one thumbnail and skipping decoding of non-I-pictures that describe differences in relation to the at least one I-picture included in the video such that the non-I-pictures are not utilized in the generating of the at least one thumbnail. For robust thumbnail generation, when at least one I-picture has not been identified in the video in a predetermined time, falling back to decoding subsequent non-I-pictures in the video to generate the thumbnail from non-I-pictures.
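
A sketch of the described decode-skip-fallback strategy; `decode` and `is_i_picture` are hypothetical stand-ins for whatever decoder interface is actually used, and the two-second timeout is illustrative.

```python
import time

def generate_thumbnail(pictures, decode, is_i_picture, timeout_s=2.0):
    """Sketch of the strategy described above.  `pictures` is an iterable of
    coded pictures in bitstream order; `decode` and `is_i_picture` stand in
    for whatever decoder API is actually available (not real library calls)."""
    deadline = time.monotonic() + timeout_s
    for picture in pictures:
        if is_i_picture(picture):
            # Cheap path: one I-picture is enough; non-I pictures are skipped.
            return decode(picture)
        if time.monotonic() > deadline:
            # Robust fallback: no I-picture found in the allotted time, so
            # decode a non-I picture even though its references are missing.
            return decode(picture)
    return None
```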

07-02-2013 publication date

REDUCED LATENCY VIDEO STABILIZATION

Number: US20130033612A1
Assignee: Microsoft Corporation

Reduced latency video stabilization methods and tools generate truncated filters for use in the temporal smoothing of global motion transforms representing jittery motion in captured video. The truncated filters comprise future and past tap counts that can be different from each other and are typically less than those of a baseline filter providing a baseline of video stabilization quality. The truncated filter future tap count can be determined experimentally by comparing a smoothed global motion transform set generated by applying a baseline filter to a video segment to those generated by multiple test filter with varying future tap counts, then settings the truncated filter future tap count based on an inflection point on an error-future tap count curve. A similar approach can be used to determine the truncated filter past tap count.

25-05-2023 publication date

ENCODER-SIDE SEARCH RANGES HAVING HORIZONTAL BIAS OR VERTICAL BIAS

Number: US20230164349A1
Assignee: Microsoft Technology Licensing, LLC

Innovations in encoder-side search ranges having horizontal bias or vertical bias are described herein. For example, a video encoder determines a block vector (“BV”) for a current block of a picture, performs intra prediction for the current block using the BV, and encodes the BV. The BV indicates a displacement to a region within the picture. When determining the BV, the encoder checks a constraint that the region is within a BV search range having a horizontal bias or vertical bias. The encoder can select the BV search range from among multiple available BV search ranges, e.g., depending at least in part on BV values of one or more previous blocks, which can be tracked in a histogram data structure.

05-04-2022 publication date

Content delivery of live streams with playback-conditions-adaptive encoding

Number: US0011297355B1
Assignee: Amazon Technologies, Inc.

Techniques are described for creating and using playback-conditions-adaptive live video encoding ladders.

21-03-2024 publication date

CUSTOM DATA INDICATING NOMINAL RANGE OF SAMPLES OF MEDIA CONTENT

Number: US20240098320A1
Assignee: Microsoft Technology Licensing, LLC

A media processing tool adds custom data to an elementary media bitstream or media container. The custom data indicates nominal range of samples of media content, but the meaning of the custom data is not defined in the codec format or media container format. For example, the custom data indicates the nominal range is full range or limited range. For playback, a media processing tool parses the custom data and determines an indication of media content type. A rendering engine performs color conversion operations whose logic changes based at least in part on the media content type. In this way, a codec format or media container format can in effect be extended to support full nominal range media content as well as limited nominal range media content, and hence preserve full or correct color fidelity, while maintaining backward compatibility and conformance with the codec format or media container format.
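
For 8-bit video, limited (studio) nominal range places luma in 16..235 while full range uses 0..255, so a renderer told "limited" by the custom data would expand samples before display. A NumPy sketch of that conversion (luma only; chroma uses the analogous 16..240 range):

```python
import numpy as np

def to_full_range(y_plane_8bit, nominal_range):
    """Sketch: expand 8-bit luma from limited nominal range (16..235) to full
    range (0..255) when custom data flags the content as limited range."""
    if nominal_range == "full":
        return y_plane_8bit                      # nothing to do
    y = y_plane_8bit.astype(np.float32)
    full = (y - 16.0) * (255.0 / (235.0 - 16.0))
    return np.clip(np.rint(full), 0, 255).astype(np.uint8)

print(to_full_range(np.array([16, 126, 235], dtype=np.uint8), "limited"))  # -> [  0 128 255]
```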

26-05-2020 publication date

Combining encoded video streams

Number: US0010666903B1
Assignee: Amazon Technologies, Inc., AMAZON TECH INC

Techniques are described by which multiple, independently encoded video streams may be combined into a single decodable video stream. These techniques take advantage of existing features of commonly used video codecs that support the independent encoding of different regions of an image frame (e.g., H.264 slices or HEVC tiles). Instead of including different parts of the same image, each region corresponds to the encoded image data of the frames of one of the independent video streams.

19-07-2018 publication date

JUST-IN-TIME VARIABLE ADAPTIVE ENCODING AND DELIVERY OF MEDIA CONTENT

Number: US20180205778A1
Assignee:

Techniques are described for just-in-time variable adaptive encoding and delivery of media content. Fragments of media content are encoded at bitrates corresponding to available bandwidth of client devices. If the available bandwidth changes, the bitrate at which fragments are being encoded is adjusted to correspond with the changed bandwidth. 1. A computer-implemented method , comprising:receiving a request from a first client device for a portion of first media content;receiving a request from a second client device for a portion of second media content;determining a complexity associated with the portion of the first media content;determining a complexity associated with the portion of the second media content;determining available bandwidth of the first client device;determining available bandwidth of the second client device;determining available resources of one or more media servers communicating with the first client device and the second client device;selecting a first bitrate according to the available bandwidth of the first client device, the complexity associated with the portion of the first media content, the complexity associated with the portion of the second media content, and the available resources of the one or more media servers;selecting a second bitrate according to the available bandwidth of the second client device, the complexity associated with the portion of the second media content, the complexity associated with the portion of the first media content, and the available resources of the one or more media servers;using a first encoder, encoding a first fragment corresponding to the portion of the first media content at the first bitrate;using a second encoder, encoding a second fragment corresponding to the portion of second the media content at the second bitrate;providing the first fragment to the first client device;providing the second fragment to the second client devicereceiving an additional request for an additional portion of the ...

06-06-2013 publication date

ADAPTIVE CONTROL OF DISPLAY REFRESH RATE BASED ON VIDEO FRAME RATE AND POWER EFFICIENCY

Number: US20130141642A1
Assignee: Microsoft Corporation

A battery operated device, having a display with two or more available refresh rates, has its refresh rate selected so as to match the video frame rate of video data played back on the display. This selection is made by coordinating the resources in the device that are used to process the video from its reception through to its display. 1. A computer-implemented process comprising:determining a video frame rate for video data including a sequence of images to be played back at the video frame rate;determining available refresh rates for a display where the moving picture is to be displayed;selecting a refresh rate of the display from among the available refresh rates according to the determined rate of display of images; andsetting the refresh rate of the display to the selected refresh rate.2. The computer-implemented process of claim 1 , wherein determining the video frame rate comprises reading metadata from data including a bitstream encoding the moving picture claim 1 , the metadata including data defining the video frame rate.3. The computer-implemented process of claim 2 , wherein determining the video frame rate comprises received reliability data regarding the video frame rate.4. The computer-implemented process of claim 2 , wherein the metadata comprises presentation time stamps associated with each image.5. The computer-implemented process of claim 1 , wherein determining the available refresh rates comprises requesting the available refresh rates for the display.6. The computer-implemented process of claim 1 , wherein determining the available refresh rates comprises receiving the available refresh rates for the display.7. The computer-implemented process of claim 1 , wherein selecting the refresh rate comprises selecting a refresh rate higher than the video frame rate.8. The computer-implemented process of claim 7 , wherein the refresh rate is an integer multiple of the rate of display.9. The computer-implemented process of claim 7 , wherein the refresh ...
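
A sketch of the selection rule suggested by the claims (prefer a refresh rate that is an integer multiple of the video frame rate, otherwise one at least as high); the tie-break toward the lowest qualifying rate, for power efficiency, is an assumption.

```python
def select_refresh_rate(video_fps, available_rates):
    """Sketch: pick a display refresh rate for the given video frame rate,
    preferring the lowest available rate that is an integer multiple of the
    frame rate (e.g. 24 fps -> 48 Hz), otherwise the lowest rate that is at
    least the frame rate, otherwise the highest rate the panel offers."""
    multiples = [r for r in available_rates if r % video_fps == 0]
    if multiples:
        return min(multiples)
    at_least = [r for r in available_rates if r >= video_fps]
    return min(at_least) if at_least else max(available_rates)

print(select_refresh_rate(24, [48, 50, 60]))   # -> 48
print(select_refresh_rate(25, [48, 60]))       # -> 48
```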

08-12-2020 publication date

Manifest data for server-side media fragment insertion

Number: US0010863211B1
Assignee: Amazon Technologies, Inc., AMAZON TECH INC

Methods and apparatus are described for providing media presentations that include content originating from multiple sources. Techniques disclosed include server-side logic for inserting secondary content, such as advertisements, into primary content, such as a VOD presentation. Systems implementing the disclosed techniques can support different viewer device capabilities relating to displaying media presentations that include content from multiple sources.

10-03-2016 publication date

Lossy Data Stream Decoder

Number: US20160072773A1
Assignee: Microsoft Technology Licensing LLC

Lossy data stream decoder techniques are described herein. In response to a request for decoded content from a consuming application, a decoder may validate headers and identify portions of the data that are considered pertinent to the request. The decoder then performs lossy extraction to form incomplete data that is provided to the consuming application in response to the request. The full data for the data stream is not exposed to the consuming application or other downstream components. In this way, the consuming application is provided data sufficient to perform requested graphics processing and resource management operations, while at the same time the risk of piracy is mitigated since the consuming application is unable to get a full version of the data in the clear and the data have been validated by the decoder.

07-09-2021 publication date

Supplemental enhancement information including confidence level and mixed content information

Number: US0011115668B2

This application relates to video encoding and decoding, and specifically to tools and techniques for using and providing supplemental enhancement information in bitstreams. Among other things, the detailed description presents innovations for bitstreams having supplemental enhancement information (SEI). In particular embodiments, the SEI message includes picture source data (e.g., data indicating whether the associated picture is a progressive scan picture or an interlaced scan picture and/or data indicating whether the associated picture is a duplicate picture). The SEI message can also express a confidence level of the encoder's relative confidence in the accuracy of this picture source data. A decoder can use the confidence level indication to determine whether the decoder should separately identify the picture as progressive or interlaced and/or a duplicate picture or honor the picture source scanning information in the SEI as it is.

More details
24-12-2019 publication date

Data format suitable for fast massively parallel general matrix multiplication in a programmable IC

Number: US0010515135B1
Assignee: Xilinx, Inc.

Methods and apparatus are described for performing data-intensive compute algorithms, such as fast massively parallel general matrix multiplication (GEMM), using a particular data format for both storing data to and reading data from memory. This data format may be utilized for arbitrarily-sized input matrices for GEMM implemented on a finite-size GEMM accelerator in the form of a rectangular compute array of digital signal processing (DSP) elements or similar compute cores. This data format solves the issue of double data rate (DDR) dynamic random access memory (DRAM) bandwidth by allowing both linear DDR addressing and single cycle loading of data into the compute array, avoiding input/output (I/O) and/or DDR bottlenecks.
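
The abstract does not spell out the layout, so the following is only a generic illustration of the underlying idea, namely reordering an arbitrarily-sized matrix into fixed-size tiles so each tile is contiguous in memory and can be streamed with linear addressing; names and tile sizes are hypothetical, not the patented format itself:

    import numpy as np

    def pack_into_tiles(a, tile_rows, tile_cols):
        """Zero-pad a matrix to a tile multiple and reorder it tile by tile.

        After packing, every (tile_rows x tile_cols) tile occupies a contiguous
        block, so a compute array of that shape can be fed with simple linear
        (burst-friendly) reads instead of strided accesses.
        """
        rows, cols = a.shape
        padded = np.pad(a, ((0, -rows % tile_rows), (0, -cols % tile_cols)))
        r, c = padded.shape
        tiles = padded.reshape(r // tile_rows, tile_rows, c // tile_cols, tile_cols)
        return np.ascontiguousarray(tiles.transpose(0, 2, 1, 3))

    a = np.arange(6 * 5).reshape(6, 5)
    print(pack_into_tiles(a, 4, 4).shape)   # (2, 2, 4, 4): four contiguous 4x4 tiles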

More details
25-03-2021 publication date

SIGNALING OF STATE INFORMATION FOR A DECODED PICTURE BUFFER AND REFERENCE PICTURE LISTS

Number: US20210092441A1
Assignee: Microsoft Technology Licensing, LLC

Innovations for signaling state of a decoded picture buffer (“DPB”) and reference picture lists (“RPLs”). In example implementations, rather than rely on internal state of a decoder to manage and update DPB and RPLs, state information about the DPB and RPLs is explicitly signaled. This permits a decoder to determine which pictures are expected to be available for reference from the signaled state information. For example, an encoder determines state information that identifies which pictures are available for use as reference pictures (optionally considering feedback information from a decoder about which pictures are available). The encoder sets syntax elements that represent the state information. In doing so, the encoder sets identifying information for a long-term reference picture (“LTRP”), where the identifying information is a value of picture order count least significant bits for the LTRB. The encoder then outputs the syntax elements as part of a bitstream. 144.-. (canceled)45. A computing system comprising: determining a set of reference pictures available for the current picture, the set of reference pictures including at least one long-term reference picture (“LTRP”);', 'determining LTRP status information for the current picture, wherein the LTRP status information for the current picture identifies which pictures, if any, among the set of reference pictures are available for use as LTRPs for the current picture; and', 'setting syntax elements that represent the LTRP status information for the current picture, including setting identifying information for a given LTRP in the LTRP status information for the current picture, wherein the identifying information for the given LTRP is a value of picture order count least significant bits (“POC LSB s”) for the given LTRP for the current picture, the value of POC LSBs for the given LTRP having a variable number of bits, and the value of the POC LSBs for the given LTRP, modulo a most significant bit wrapping ...

More details
23-03-2021 publication date

Content delivery of live streams with playback-conditions-adaptive encoding

Number: US0010958947B1
Assignee: Amazon Technologies, Inc.

Techniques are described for creating and using playback-conditions-adaptive live video encoding ladders.

More details
07-09-2021 publication date

Custom data indicating nominal range of samples of media content

Number: US0011115691B2

A media processing tool adds custom data to an elementary media bitstream or media container. The custom data indicates nominal range of samples of media content, but the meaning of the custom data is not defined in the codec format or media container format. For example, the custom data indicates the nominal range is full range or limited range. For playback, a media processing tool parses the custom data and determines an indication of media content type. A rendering engine performs color conversion operations whose logic changes based at least in part on the media content type. In this way, a codec format or media container format can in effect be extended to support full nominal range media content as well as limited nominal range media content, and hence preserve full or correct color fidelity, while maintaining backward compatibility and conformance with the codec format or media container format.
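
For example, the renderer's luma normalization changes with the signaled nominal range; a small sketch for 8-bit samples (function name illustrative):

    def normalize_luma(y, full_range):
        """Map an 8-bit luma sample to [0.0, 1.0].

        Full range uses the whole 0..255 code space; limited (video) range
        reserves 16..235, so the renderer must expand it before display.
        """
        if full_range:
            return y / 255.0
        return min(max((y - 16) / 219.0, 0.0), 1.0)

    print(normalize_luma(235, full_range=False))   # 1.0
    print(normalize_luma(235, full_range=True))    # ~0.922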

More details
12-01-2021 publication date

Subtitle processing for devices with limited memory

Number: US0010893331B1
Assignee: Amazon Technologies, Inc.

Methods and apparatus are described for reducing subtitle information for just-after-broadcast (JAB) content. Redundant information in subtitle entries is removed so that some client devices with limited memory can handle the single subtitle file that is delivered with JAB content.

More details
19-01-2021 publication date

Content delivery of live streams with event-adaptive encoding

Number: US0010897654B1
Assignee: Amazon Technologies, Inc.

Techniques are described for optimizing event-adaptive live video encoding profiles.

More details
07-06-2022 publication date

Client-side caching of media content

Number: US0011356516B1
Assignee: Amazon Technologies, Inc.

Methods and apparatus are described for facilitating the client-side caching of media content based on one or more properties of the media content. Information relating to the cacheability of different types of content is communicated to the media player on a client device in the manifest or playlist employed by the media player to request fragments of the media content. The media player uses this information to make decisions about how to cache the corresponding content.

More details
17-01-2017 publication date

Data unit identification for compressed video streams

Number: US0009549196B2

Data unit identification for compressed video streams is described. In one or more implementations, a compressed video stream is received at a computing device and a determination is made as to whether prior knowledge is available that relates to the compressed video stream. Responsive to the determination that prior knowledge is available that relates to the compressed video stream, the prior knowledge is employed by the computing device to perform data unit identification for the compressed video stream. In one or more implementations, SIMD instructions are utilized to perform a batch search for the pattern 0x00 00; a byte-by-byte search is then performed to confirm whether each 0x00 00 pattern found is part of a start code, 0x00 00 01.
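
A plain-Python sketch of that two-stage search (the batch 0x00 00 scan stands in for the SIMD step; names are illustrative):

    def find_start_codes(buf):
        """Locate Annex B start codes (0x00 00 01) in a byte buffer.

        Stage 1 finds candidate 0x00 00 pairs in batch (the role played by
        SIMD instructions); stage 2 confirms byte-by-byte that 0x01 follows.
        """
        positions = []
        i = buf.find(b"\x00\x00")
        while i != -1:
            if i + 2 < len(buf) and buf[i + 2] == 0x01:
                positions.append(i)
            i = buf.find(b"\x00\x00", i + 1)
        return positions

    data = b"\x00\x00\x01\x67data\x00\x00\x00\x01\x68more"
    print(find_start_codes(data))   # [0, 9]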

More details
13-04-2017 publication date

RECEIVER-SIDE MODIFICATIONS FOR REDUCED VIDEO LATENCY

Number: US20170105010A1
Assignee:

A host has a graphics pipeline that process frames by portions (e.g., pixels or rows) or slices. A remote device transmits a video stream container via a network to the host. A frame of the video stream in the container has encoded portions. The graphics pipeline includes a demultiplexer that extracts the portions of the video frame. When a portion has been extracted it is passed to a decoder, which is next in the pipeline. The decoder may begin decoding the portion before receiving a next portion of the frame, possibly while the demultiplexer is demultiplexing the next portion of the frame. A decoded portion of the frame is passed to a renderer which accumulates the portions of the frame and renders the frame. At any time portions of a frame might concurrently be being received, demultiplexed, decoded, and rendered. The decoder may be single-threaded, multi-threaded, or hardware accelerated. 1. A computing device comprising:processing hardware, storage hardware, and a network interface configured to receive packets containing multimedia container portions comprising encoded slices of a video frame, the packets received via a network from a host that encoded the encoded slices and that generated the container portions;a demultiplexer configured to demultiplex the encoded slices of the video frame from the container portions; anda decoder configured to receive and decompress the encoded slices of the video frame from the demultliplexer, wherein the decoder receives a demultiplexed encoded slice of the video frame from the demultiplexer before another encoded slice of the video frame has been demultiplexed by the demultiplexer.2. A computing device according to claim 1 , wherein the demultiplexer is configured to demultiplex the other encoded slice of the video frame while the decoder is decompressing the encoded slice of the video frame.3. A computing device according to claim 2 , wherein the computing device further comprises a renderer claim 2 , wherein the ...

More details
22-11-2016 publication date

Array substrate and liquid crystal display device

Number: US0009500924B2

The array substrate includes: a substrate; a plurality of scan lines and a plurality of data lines disposed on the substrate intersecting each other to define a plurality of pixel elements and insulated from each other; a first transparent conductive layer disposed on the substrate; and a second transparent conductive layer disposed on the substrate and in parallel to and insulated from the first transparent conductive layer. The data lines, the first transparent conductive layer, and the second transparent conductive layer each comprise a plurality of bended portions, and the bended portions of the second transparent conductive layer are parallel to those of the first transparent conductive layer; additionally or alternatively; the bended portions of the data lines are parallel to those of the first transparent conductive layer or the second transparent conductive layer.

More details
24-01-2012 publication date

Method and apparatus for mechanically splicing optic fibers

Number: US0008103144B1

A method and apparatus for mechanically splicing a pair of optic fibers or optic cables, the mechanical splice comprising: a ferrule having an axial capillary bore, the capillary bore configured to enclose the optic fibers at both ends of the ferrule; and cured epoxy disposed to secure together the ends of the optic fibers and to secure the optic fibers to an inside surface of the capillary bore, the ferrule optionally enclosed in a metal tube.

More details
05-03-2013 publication date

Smooth rewind media playback

Number: US0008391688B2

Systems and methods for smooth rewind playback of streamed media are provided. The media includes relatively-encoded frames and independently-encoded frames. The method includes receiving a rewind request indicating a rewind speed for rewind playback of the media, selectively dropping relatively-encoded frame(s) based on a receipt constraint and a decoding constraint to form a subset of the media, and receiving frames of the subset. The method further includes selecting, in a reverse order, a selected group of pictures (GOP) included within the subset, and decoding relatively-encoded frame(s) of the GOP in a forward sequential frame order. The method further includes caching relatively-encoded frame(s) of the GOP in the forward sequential frame order, and when caching, dropping and overwriting relatively-encoded frame(s) of the GOP selectively according to a memory constraint and/or a display constraint. The method further includes displaying relatively-encoded frame(s) of the GOP in a ...
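
A highly simplified model of that flow, assuming each GOP has already been demultiplexed into a list of frames; the dropping rule below is a stand-in for the receipt, decoding, memory, and display constraints described above:

    def rewind_frames(gops, speed):
        """Yield frames for rewind playback at roughly `speed`x.

        Each GOP is a list of decoded frames in forward order. GOPs are taken
        in reverse; within a GOP a subset is decoded/cached in forward order
        and then displayed backwards.
        """
        for gop in reversed(gops):
            cached = gop[::max(1, speed)]        # selectively drop frames
            for frame in reversed(cached):       # display cached frames in reverse
                yield frame

    gops = [[f"g{g}f{i}" for i in range(4)] for g in range(2)]
    print(list(rewind_frames(gops, 2)))   # ['g1f2', 'g1f0', 'g0f2', 'g0f0']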

More details
26-09-2012 publication date

Global alignment for high-dynamic range image generation

Number: CN102693538A
Assignee:

Techniques and tools for high dynamic range (HDR) image generation and rendering are described herein. In several described embodiments, images having distinct exposure levels are aligned. In particular embodiments, the alignment of a reference image to a non-reference image is based at least in part on motion vectors that are determined using covariance computations. Furthermore, in certain embodiments, saturated areas, underexposed areas, and/or moving objects are ignored or substantially ignored during the image alignment process. Moreover, in certain embodiments, a hierarchical pyramid block-based scheme is used to perform local motion estimation between the reference image and the non-reference image.

More details
03-03-2020 publication date

Excluding masked regions of virtual reality (VR) frames from encoder processing

Number: US0010580167B1
Assignee: Amazon Technologies, Inc.

Techniques are described that enable a two-dimensional (2D) representation of three-dimensional (3D) virtual reality (VR) content to be encoded. These techniques include encoding VR content while excluding non-display pixels of the VR content from motion estimation during encoder processing.

More details
17-03-2020 publication date

Mirroring edge pixels

Number: US0010593122B1
Assignee: Amazon Technologies, Inc.

Techniques are described that enable a two-dimensional (2D) representation of three-dimensional (3D) virtual reality content to be generated and encoded. These techniques include modifying non-display pixels within the 2D representation to soften the transitions between display pixels and non-display pixels.

More details
23-07-2015 publication date

INTRA BLOCK COPY PREDICTION WITH ASYMMETRIC PARTITIONS AND ENCODER-SIDE SEARCH PATTERNS, SEARCH RANGES AND APPROACHES TO PARTITIONING

Number: US20150208084A1
Assignee: MICROSOFT CORPORATION

Innovations in intra block copy (“BC”) prediction as well as innovations in encoder-side search patterns and approaches to partitioning. For example, some of the innovations relate to use of asymmetric partitions for intra BC prediction. Other innovations relate to search patterns or approaches that an encoder uses during block vector estimation (for intra BC prediction) or motion estimation. Still other innovations relate to uses of BV search ranges that have a horizontal or vertical bias during BV estimation. 1. In a computing device that implements an image or video encoder , a method comprising:encoding an image or video to produce encoded data, including performing intra block copy (“BC”) prediction for a current block that is asymmetrically partitioned for the intra BC prediction; andoutputting the encoded data as part of a bitstream.2. The method of wherein the current block is a 2N×2N block claim 1 , and wherein the current block is partitioned into (1) a 2N×N/2 block and 2N×3N/2 block or (2) a 2N×3N/2 block and 2N×N/2 block.3. The method of wherein the current block is a 2N×2N block claim 1 , and wherein the current block is partitioned into (1) an N/2×2N block and 3N/2×2N block or (2) a 3N/2×2N block and N/2×2N block.4. The method of wherein the encoding further includes performing intra BC prediction for another block that is symmetrically partitioned for the intra BC prediction claim 1 , wherein the other block is a 2N×2N block claim 1 , and wherein the other block is partitioned into (1) two 2N×N blocks claim 1 , (2) two N×2N blocks claim 1 , or (3) four N×N blocks claim 1 , each of which can be further partitioned into two N×N/2 blocks claim 1 , two N/2×N blocks or four N/2×N/2 blocks.5. The method of wherein the current block is a 64×64 block claim 1 , 32×32 block claim 1 , 16×16 block or 8×8 block.6. The method of wherein the video is artificially-created video.7. A computing device that implements an image or video decoder claim 1 , wherein the ...
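
The asymmetric splits listed in the claims correspond to the familiar HEVC-style AMP shapes; a small helper that enumerates them for a 2Nx2N block (the "2NxnU"-style names follow HEVC convention and are not taken from the text above):

    def asymmetric_partitions(size):
        """(width, height) pairs for the two parts of each asymmetric split
        of a size x size (2Nx2N) block, e.g. 2N x N/2 plus 2N x 3N/2."""
        n = size // 2
        return {
            "2NxnU": [(size, n // 2), (size, 3 * n // 2)],
            "2NxnD": [(size, 3 * n // 2), (size, n // 2)],
            "nLx2N": [(n // 2, size), (3 * n // 2, size)],
            "nRx2N": [(3 * n // 2, size), (n // 2, size)],
        }

    print(asymmetric_partitions(32)["2NxnU"])   # [(32, 8), (32, 24)]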

More details
01-12-2016 publication date

DECODING OF INTRA-PREDICTED IMAGES

Number: US20160353128A1
Assignee:

In a computer with a graphics processing unit as a coprocessor of a central processing unit, the graphics processing unit is programmed to perform waves of parallel operations to decode intra-prediction blocks of an image encoded in a certain video coding format. To decode the intra-prediction blocks of an image using the graphics processing unit, the intra-predicted blocks and their reference blocks are identified. The computer identifies whether pixel data from the reference blocks for these intra-predicted blocks are available. Blocks for which pixel data from reference blocks are available are processed in waves of parallel operations on the graphics processing unit as the pixel data becomes available. The process repeats until all intra-predicted blocks are processed. The identification of blocks to process in each wave can be determined by the graphics processing unit or the central processing unit.

More details
20-06-2013 publication date

HARDWARE-ACCELERATED DECODING OF SCALABLE VIDEO BITSTREAMS

Number: US20130156101A1
Assignee: Microsoft Corporation

In various respects, hardware-accelerated decoding is adapted for decoding of video that has been encoded using scalable video coding. For example, for a given picture to be decoded, a host decoder determines whether a corresponding base picture will be stored for use as a reference picture. If so, the host decoder directs decoding with an accelerator such that the some of the same decoding operations can be used for the given picture and the reference base picture. Or, as another example, the host decoder groups encoded data associated with a given layer representation in buffers. The host decoder provides the encoded data for the layer to the accelerator. The host decoder repeats the process layer-after-layer in the order that layers appear in the bitstream, according to a defined call pattern for an acceleration interface, which helps the accelerator determine the layers with which buffers are associated. 1. A tangible computer-readable medium storing computer-executable instructions for causing a computing system to perform a method comprising:receiving at least part of a bitstream for video data having been encoded using scalable video coding, the bitstream including encoded data for a given picture to be decoded for output, the given picture having a reference base picture to be stored for use as a reference picture; andwith a host decoder, calling an acceleration interface to direct decoding of the given picture and decoding of the reference base picture by an accelerator, including interleaving at least some calls for the decoding of the reference base picture with at least some calls for the decoding of the given picture.2. The computer-readable medium of wherein the interleaving facilitates recognition by the accelerator of opportunities to share operations between the decoding of the reference base picture and the decoding of the given picture.3. The computer-readable medium of wherein the calling includes:calling a first routine to signal initiation of ...

More details
17-08-2017 publication date

Joint Video Stabilization and Rolling Shutter Correction on a Generic Platform

Number: US20170236257A1
Assignee: Microsoft Technology Licensing, LLC

In one embodiment, a video processing system may filter a video data set to correct skew and wobble using a central processing unit and a graphical processing unit . The video processing system may apply a rolling shutter effect correction filter to an initial version of a video data set. The video processing system may simultaneously apply a video stabilization filter to the initial version to produce a final version video data set. 1. A machine-implemented method , comprising:determining a filtering apportionment between a graphical processing unit and a central processing unit based on a prior filter performance;applying a rolling shutter effect correction filter to an initial version of a video data set; andapplying a video stabilization filter to the initial version to produce a final version of the video data set.2. The method of claim 1 , further comprising:executing a motion estimation on the initial version using the graphical processing unit to create a down sample set.3. The method of claim 1 , further comprising:processing a down sample set of the initial version using the central processor unit to create a motion vector set.4. The method of claim 1 , further comprising:warping an image of the initial version by applying a motion vector set using the graphical processing unit.5. The method of claim 4 , further comprising:adjusting a warping constant on the motion vector set based on a previous iteration.6. The method of claim 1 , further comprising:creating a preview proxy of the video data set using the rolling shutter effect correction filter and the video stabilization filter.7. The method of claim 1 , further comprising:caching a preview proxy set of the video data set.8. The method of claim 1 , further comprising:receiving a user selection of a preview proxy of a preview proxy set; andcreating the final version based on the user selection.9. The method of claim 1 , further comprising:setting a filter parameter for the rolling shutter effect ...

More details
16-02-2021 publication date

Signaling of state information for a decoded picture buffer and reference picture lists

Number: US0010924760B2

Innovations for signaling state of a decoded picture buffer (“DPB”) and reference picture lists (“RPLs”). In example implementations, rather than rely on internal state of a decoder to manage and update DPB and RPLs, state information about the DPB and RPLs is explicitly signaled. This permits a decoder to determine which pictures are expected to be available for reference from the signaled state information. For example, an encoder determines state information that identifies which pictures are available for use as reference pictures (optionally considering feedback information from a decoder about which pictures are available). The encoder sets syntax elements that represent the state information. In doing so, the encoder sets identifying information for a long-term reference picture (“LTRP”), where the identifying information is a value of picture order count least significant bits for the LTRP. The encoder then outputs the syntax elements as part of a bitstream.
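
The POC-LSB identifier mentioned above is just the picture order count reduced modulo a power of two; a one-line illustration (the bit width is chosen arbitrarily here):

    def poc_lsbs(poc, num_bits):
        """Picture order count LSBs used to identify a long-term reference picture."""
        return poc & ((1 << num_bits) - 1)

    # With 4 LSB bits, POC 5 and POC 21 share the value 5, which is why the
    # signaling may also need MSB-cycle information to disambiguate them.
    print(poc_lsbs(5, 4), poc_lsbs(21, 4))   # 5 5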

More details
12-02-2019 publication date

Syntax structures indicating completion of coded regions

Number: US0010205966B2

Syntax structures that indicate the completion of coded regions of pictures are described. For example, a syntax structure in an elementary bitstream indicates the completion of a coded region of a picture. The syntax structure can be a type of network abstraction layer unit, a type of supplemental enhancement information message or another syntax structure. For example, a media processing tool such as an encoder can detect completion of a coded region of a picture, then output, in a predefined order in an elementary bitstream, syntax structure(s) that contain the coded region as well as a different syntax structure that indicates the completion of the coded region. Another media processing tool such as a decoder can receive, in a predefined order in an elementary bitstream, syntax structure(s) that contain a coded region of a picture as well as a different syntax structure that indicates the completion of the coded region.

More details
07-05-2024 publication date

Encoder-side search ranges having horizontal bias or vertical bias

Number: US0011979601B2
Assignee: Microsoft Technology Licensing, LLC

Innovations in encoder-side search ranges having horizontal bias or vertical bias are described herein. For example, a video encoder determines a block vector (“BV”) for a current block of a picture, performs intra prediction for the current block using the BV, and encodes the BV. The BV indicates a displacement to a region within the picture. When determining the BV, the encoder checks a constraint that the region is within a BV search range having a horizontal bias or vertical bias. The encoder can select the BV search range from among multiple available BV search ranges, e.g., depending at least in part on BV values of one or more previous blocks, which can be tracked in a histogram data structure.
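
A rough sketch of how previous BV values tracked in a histogram could steer the choice between a horizontally and a vertically biased search range; the class, thresholds, and range shapes are illustrative only:

    from collections import Counter

    class BvRangeSelector:
        """Select a horizontally or vertically biased BV search range based on
        a histogram of the directions of previously chosen block vectors."""

        def __init__(self):
            self.hist = Counter()

        def record(self, bv_x, bv_y):
            self.hist["h" if abs(bv_x) >= abs(bv_y) else "v"] += 1

        def search_range(self, full_w, full_h):
            if self.hist["h"] >= self.hist["v"]:
                return full_w, full_h // 4       # wide and short: horizontal bias
            return full_w // 4, full_h           # narrow and tall: vertical bias

    sel = BvRangeSelector()
    for bv in [(-64, 0), (-32, -4), (0, -48)]:
        sel.record(*bv)
    print(sel.search_range(128, 128))   # (128, 32)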

More details
27-10-2016 publication date

VIDEO ENCODER MANAGEMENT STRATEGIES

Number: US20160316220A1
Assignee: Microsoft Technology Licensing, LLC

Innovations in how a host application and video encoder share information and use shared information during video encoding are described. The innovations can help the video encoder perform certain encoding operations and/or help the host application control overall encoding quality and performance. For example, the host application provides regional motion information to the video encoder, which the video encoder can use to speed up motion estimation operations for units of a current picture and more generally improve the accuracy and quality of motion estimation. Or, as another example, the video encoder provides information about the results of encoding the current picture to the host application, which the host application can use to determine when to start a new group of pictures at a scene change boundary. By sharing information in this way, the host application and the video encoder can improve encoding performance, especially for real-time communication scenarios.

More details
30-08-2012 publication date

GLOBAL ALIGNMENT FOR HIGH-DYNAMIC RANGE IMAGE GENERATION

Number: US20120218442A1
Assignee: Microsoft Corporation

Techniques and tools for high dynamic range (“HDR”) image generation and rendering are described herein. In several described embodiments, images having distinct exposure levels are aligned. In particular embodiments, the alignment of a reference image to a non-reference image is based at least in part on motion vectors that are determined using covariance computations. Furthermore, in certain embodiments, saturated areas, underexposed areas, and/or moving objects are ignored or substantially ignored during the image alignment process. Moreover, in certain embodiments, a hierarchical pyramid block-based scheme is used to perform local motion estimation between the reference image and the non-reference image. 1. A method of generating a high dynamic range digital image , the method comprising: performing motion analysis between a reference image and a set of one or more non-reference images, the motion analysis comprising computing one or more covariance values between a set of pixel sample values in the reference image and a set of pixel sample values in one of the non-reference images from the set; and', 'based at least in part on the motion analysis, merging at least the one of the non-reference images from the set with the reference image to form a higher dynamic range digital image., 'using a computing device,'}2. The method of claim 1 , wherein each non-reference image in the set of one or more non-reference images is a digital image having a different exposure level than the reference image.3. The method of claim 1 , wherein the set of one or more non-reference images comprises a non-reference image with a lower exposure level than the reference image and a non-reference image with a higher exposure level than the reference image.4. The method of claim 1 , wherein the performing motion analysis comprises:estimating two or more local motion vectors for corresponding block in the one of the non-reference images from the set relative to the reference image; ...

More details
20-09-2022 publication date

Supplemental enhancement information including confidence level and mixed content information

Number: US0011451795B2
Assignee: Microsoft Technology Licensing, LLC

This application relates to video encoding and decoding, and specifically to tools and techniques for using and providing supplemental enhancement information in bitstreams. Among other things, the detailed description presents innovations for bitstreams having supplemental enhancement information (SEI). In particular embodiments, the SEI message includes picture source data (e.g., data indicating whether the associated picture is a progressive scan picture or an interlaced scan picture and/or data indicating whether the associated picture is a duplicate picture). The SEI message can also express a confidence level of the encoder's relative confidence in the accuracy of this picture source data. A decoder can use the confidence level indication to determine whether the decoder should separately identify the picture as progressive or interlaced and/or a duplicate picture or honor the picture source scanning information in the SEI as it is.

More details
11-08-2020 publication date

Scalable video coding techniques

Number: US0010743003B1
Assignee: Amazon Technologies, Inc.

Techniques are described that enable virtual reality content to be delivered using a video codec that operates according to a scalable video encoding standard. These techniques include selectively downloading and decoding frames of video content.

More details
23-05-2019 publication date

SYNTAX STRUCTURES INDICATING COMPLETION OF CODED REGIONS

Number: US20190158881A1
Assignee: Microsoft Technology Licensing, LLC

Syntax structures that indicate the completion of coded regions of pictures are described. For example, a syntax structure in an elementary bitstream indicates the completion of a coded region of a picture. The syntax structure can be a type of network abstraction layer unit, a type of supplemental enhancement information message or another syntax structure. For example, a media processing tool such as an encoder can detect completion of a coded region of a picture, then output, in a predefined order in an elementary bitstream, syntax structure(s) that contain the coded region as well as a different syntax structure that indicates the completion of the coded region. Another media processing tool such as a decoder can receive, in a predefined order in an elementary bitstream, syntax structure(s) that contain a coded region of a picture as well as a different syntax structure that indicates the completion of the coded region. 120.-. (canceled)21. One or more computer-readable media having stored thereon computer-executable instructions for causing a processor , when programmed thereby , to perform operations comprising:receiving, in an elementary bitstream, one or more syntax structures that contain a coded region for a region of an image or video, and, after the one or more syntax structures that contain the coded region, a different syntax structure, the different syntax structure including a next slice segment address that indicates a slice segment address for a next slice segment header when the slice segment address for the next slice segment header is present in the elementary bitstream; anddetecting the completion of the coded region using the different syntax structure.22. The one or more computer-readable media of claim 21 , wherein the different syntax structure is a supplemental enhancement information (“SEI”) message having a payload type that designates the SEI message as an end-of-region indicator.23. The one or more computer-readable media of claim 21 , ...

More details
18-04-2019 publication date

STATIC BLOCK SCHEDULING IN MASSIVELY PARALLEL SOFTWARE DEFINED HARDWARE SYSTEMS

Number: US20190114548A1
Assignee: Xilinx, Inc.

Embodiments herein describe techniques for static scheduling a neural network implemented in a massively parallel hardware system. The neural network may be scheduled using three different scheduling levels referred to herein as an upper level, an intermediate level, and a lower level. In one embodiment, the upper level includes a hardware or software model of the layers in the neural network that establishes a sequential order of functions that operate concurrently in the hardware system. In the intermediate level, identical processes in the functions defined in the upper level are connected to form a systolic array or mesh and balanced data flow channels are used to minimize latency. In the lower level, a compiler can assign the operations performed by the processing elements in the systolic array to different portions of the hardware system to provide a static schedule for the neural network. 1. A method for scheduling a neural network , the method comprising:receiving a model defining a sequential order of a plurality of pipelined functions performed when executing at least one layer in the neural network, wherein the neural network comprises a plurality of layers;receiving a systolic array for executing identical processes in the at least one layer of the neural network; andcompiling, using one or more computing processors, source code corresponding to the model and the systolic array into a hardware level design that provides a static schedule when executing the neural network in a hardware system.2. The method of claim 1 , further comprising:configuring a field programmable gate array (FPGA) based on the hardware level design, wherein the hardware level design comprises register transfer level (RTL) code.3. The method of claim 1 , wherein compiling the source code of the systolic array comprises:converting the source the source code of the systolic array into a two dimensional array of interconnected processing elements.4. The method of claim 3 , wherein ...

More details
19-09-2023 publication date

Dynamic congestion control through real-time QOS monitoring in video streaming

Number: US0011765217B1
Assignee: Amazon Technologies, Inc.

Methods and apparatus are described for providing thinned manifests during a live event. As network usage of a regional internet service provider (ISP) or content delivery network (CDN) becomes unsustainable, new streaming sessions for a live event are provided a thinned manifest that does not have playback options for bitrates above a bitrate limit.
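
A minimal sketch of the thinning step, assuming the full manifest has already been reduced to (bitrate, URL) pairs; the names and the rule of always keeping the lowest rendition are assumptions, not details from the patent:

    def thin_manifest(variants, bitrate_limit):
        """Drop playback options above the bitrate limit for new sessions.

        `variants` is a list of (bitrate_bps, url) pairs from the full manifest.
        At least one rendition is always kept so playback remains possible.
        """
        thinned = [(b, u) for b, u in variants if b <= bitrate_limit]
        return thinned or [min(variants)]

    variants = [(800_000, "low.m3u8"), (3_000_000, "mid.m3u8"), (8_000_000, "high.m3u8")]
    print(thin_manifest(variants, 4_000_000))   # keeps the 0.8 and 3 Mbps options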

More details
01-10-2019 publication date

Streaming media file management

Number: US0010432686B1

A system for delivering live streaming content based on accurate media data fragment size and duration. The system may include a client media player to receive a portion of a streaming media file (e.g., in an MP4 format), download a first sub-portion of the streaming media file including fragment-level metadata, and parse and analyze the fragment-level metadata to determine a size and duration of a current fragment of the media file. A media server may generate custom data identifying a size and duration of a current fragment of a media file. The media server may insert the custom data (e.g., as a custom header or unique packet identifier) and send the custom data to a client media player. The client media player may be configured to decode the custom data and determine the current fragment size and duration.

More details
23-06-2016 publication date

Protected Media Decoding System Supporting Metadata

Number: US20160182952A1
Assignee:

Video content is protected using a digital rights management (DRM) mechanism, the video content having been previously encrypted and compressed for distribution, and also including metadata such as closed captioning data, which might be encrypted or clear. The video content is obtained by a system of a computing device, the metadata is extracted from the video content and provided to a video decoder, and the video content is provided to a secure DRM component. The secure DRM component decrypts the video content and provides the decrypted video content to a secure decoder component of a video decoder. As part of the decryption, the secure DRM component drops the metadata that was included in the obtained video content. However, the video decoder receives the extracted metadata in a non-protected environment, and thus is able to provide the extracted metadata and the decoded video content to a content playback application. 1. A method implemented in a computing device , the method comprising:obtaining video content from a media source, the video content including multiple video frames that include metadata as well as protected video content;extracting the metadata from the multiple video frames;providing the extracted metadata to a video decoder;providing the multiple video frames to a secure digital rights management component;receiving, from the secure digital rights management component, a re-encrypted version of the multiple video frames, the re-encrypted version of the multiple video frames comprising a version of the multiple video frames from which the protected video content has been decrypted and re-encrypted based on a key of the computing device;providing the re-encrypted version of the multiple video frames to the video decoder for decoding of the re-encrypted version of the multiple video frames rather than decoding of the multiple video frames; andproviding the extracted metadata and the decoded video frames to an application for playback.2. The method ...

More details
03-04-2014 publication date

SUPPLEMENTAL ENHANCEMENT INFORMATION INCLUDING CONFIDENCE LEVEL AND MIXED CONTENT INFORMATION

Number: US20140092992A1
Assignee: Microsoft Corporation

This application relates to video encoding and decoding, and specifically to tools and techniques for using and providing supplemental enhancement information in bitstreams. Among other things, the detailed description presents innovations for bitstreams having supplemental enhancement information (SEI). In particular embodiments, the SEI message includes picture source data (e.g., data indicating whether the associated picture is a progressive scan picture or an interlaced scan picture and/or data indicating whether the associated picture is a duplicate picture). The SEI message can also express a confidence level of the encoder's relative confidence in the accuracy of this picture source data. A decoder can use the confidence level indication to determine whether the decoder should separately identify the picture as progressive or interlaced and/or a duplicate picture or honor the picture source scanning information in the SEI as it is. 1. A method performed by an encoder device , comprising:encoding one or more pictures in a bitstream or bitstream portion, wherein the encoding includes encoding in the bitstream or bitstream portion one or more syntax elements for identifying a source scan type for the one or more pictures, the one or more syntax elements having at least a state indicating that the one or more pictures are of an interlaced scan type, a state indicating that the one or more pictures are of a progressive scan type, and a state indicating that the one or more pictures are of an unknown source scan type; andoutputting the bitstream or bitstream portion.2. The method of claim 1 , wherein the one or more syntax elements comprise a first flag indicating whether the one or more pictures are of an interlaced scan type and a second flag indicating whether the one or more pictures are of a progressive scan type.3. The method of claim 1 , wherein the one or more syntax elements comprise a single syntax element.4. The method of claim 1 , wherein the one or ...

More details
14-06-2012 publication date

LOW-LATENCY VIDEO DECODING

Number: US20120147973A1
Assignee: Microsoft Corporation

Techniques and tools for reducing latency in video decoding for real-time communication applications that emphasize low delay. For example, a tool such as a video decoder selects a low-latency decoding mode. Based on the selected decoding mode, the tool adjusts output timing determination, picture boundary detection, number of pictures in flight and/or jitter buffer utilization. For low-latency decoding, the tool can use a frame count syntax element to set initial output delay for a decoded picture buffer, and the tool can use auxiliary delimiter syntax elements to detect picture boundaries. To further reduce delay in low-latency decoding, the tool can reduce number of pictures in flight for multi-threaded decoding and reduce or remove jitter buffers. The tool receives encoded data, performs decoding according to the selected decoding mode to reconstruct pictures, and outputs the pictures for display. 1. In a computing device that implements a video decoder , a method comprising:selecting a low-latency decoding mode characterized by lower latency decoding compared to another decoding mode;based at least in part on the selected decoding mode, adjusting one or more of output timing determination, picture boundary detection, number of pictures in flight and jitter buffer utilization;receiving encoded data in a bitstream for a video sequence;with the computing device that implements the video decoder, decoding at least some of the encoded data according to the selected decoding mode to reconstruct a picture of the video sequence; andoutputting the picture for display.2. The method of wherein the output timing determination is adjusted claim 1 , wherein a frame count syntax element in the bitstream indicates a frame reordering delay claim 1 , and wherein initial output delay for a decoded picture buffer depends at least in part on the frame reordering delay.3. The method of wherein the frame reordering delay is a maximum count of frames that can precede a given frame in ...

More details
28-02-2023 publication date

Encoder-side search ranges having horizontal bias or vertical bias

Number: US0011595679B1
Assignee: Microsoft Technology Licensing, LLC

Innovations in encoder-side search ranges having horizontal bias or vertical bias are described herein. For example, a video encoder determines a block vector (“BV”) for a current block of a picture, performs intra prediction for the current block using the BV, and encodes the BV. The BV indicates a displacement to a region within the picture. When determining the BV, the encoder checks a constraint that the region is within a BV search range having a horizontal bias or vertical bias. The encoder can select the BV search range from among multiple available BV search ranges, e.g., depending at least in part on BV values of one or more previous blocks, which can be tracked in a histogram data structure.

More details
09-05-2024 publication date

SUPPLEMENTAL ENHANCEMENT INFORMATION INCLUDING CONFIDENCE LEVEL AND MIXED CONTENT INFORMATION

Number: US20240155135A1
Assignee: Microsoft Technology Licensing, LLC

This application relates to video encoding and decoding, and specifically to tools and techniques for using and providing supplemental enhancement information in bitstreams. Among other things, the detailed description presents innovations for bitstreams having supplemental enhancement information (SEI). In particular embodiments, the SEI message includes picture source data (e.g., data indicating whether the associated picture is a progressive scan picture or an interlaced scan picture and/or data indicating whether the associated picture is a duplicate picture). The SEI message can also express a confidence level of the encoder's relative confidence in the accuracy of this picture source data. A decoder can use the confidence level indication to determine whether the decoder should separately identify the picture as progressive or interlaced and/or a duplicate picture or honor the picture source scanning information in the SEI as it is.

More details
21-11-2017 publication date

Reduced latency video stabilization

Number: US0009824426B2

Reduced latency video stabilization methods and tools generate truncated filters for use in the temporal smoothing of global motion transforms representing jittery motion in captured video. The truncated filters comprise future and past tap counts that can be different from each other and are typically less than those of a baseline filter providing a baseline of video stabilization quality. The truncated filter future tap count can be determined experimentally by comparing a smoothed global motion transform set generated by applying a baseline filter to a video segment to those generated by multiple test filters with varying future tap counts, then setting the truncated filter future tap count based on an inflection point on an error-future tap count curve. A similar approach can be used to determine the truncated filter past tap count.

More details
05-02-2019 publication date

Video bit stream decoding

Number: US0010200707B2

Aspects extend to methods, systems, and computer program products for video bit stream decoding. Aspects include flexible definition and detection of surface alignment requirements for decoding hardware. Surface alignment requirements can be handled by render cropping (e.g., cropping at a video output device), through adjustment and modification of original syntax values in a video bit stream and relaxed media type negotiation in a software (host) decoder. Resolution changes can be hidden with the aligned surface allocation when applicable. Performance can be improved and power consumption reduced by using hidden resolution changes.

More details
03-08-2017 publication date

REDUCING MEMORY USAGE BY A DECODER DURING A FORMAT CHANGE

Number: US20170220283A1
Assignee: Microsoft Technology Licensing LLC

Techniques and systems for reducing memory usage by a decoder during a format change are disclosed. In a first example technique, discretized memory allocations for new output buffers are sequenced with discretized release operations of previously-allocated memory for previous output buffers in a manner that reduces the amount of in-use memory of a computing device during a format change. In a second example technique, the allocation of new memory for new decoder buffers associated with a new format is conditioned upon the release of previously-allocated memory for decoder buffers associated with a previous format to reduce memory usage during a format change. The first and second techniques, when combined, result in optimized reduction in memory usage by a decoder during a format change.

More details
16-08-2018 publication date

VIDEO DECODER MEMORY OPTIMIZATION

Number: US20180234691A1
Assignee: Amazon Technologies Inc

Techniques are described for optimizing video decoder operations.

More details
16-08-2018 publication date

FRAME PACKING AND UNPACKING HIGHER-RESOLUTION CHROMA SAMPLING FORMATS

Number: US20180234686A1
Assignee: Microsoft Technology Licensing, LLC

Video frames of a higher-resolution chroma sampling format such as YUV 4:4:4 are packed into video frames of a lower-resolution chroma sampling format such as YUV 4:2:0 for purposes of video encoding. For example, sample values for a frame in YUV 4:4:4 format are packed into two frames in YUV 4:2:0 format. After decoding, the video frames of the lower-resolution chroma sampling format can be unpacked to reconstruct the video frames of the higher-resolution chroma sampling format. In this way, available encoders and decoders operating at the lower-resolution chroma sampling format can be used, while still retaining higher resolution chroma information. In example implementations, frames in YUV 4:4:4 format are packed into frames in YUV 4:2:0 format such that geometric correspondence is maintained between Y, U and V components for the frames in YUV 4:2:0 format. 120.-. (canceled)21. A computing device comprising:one or more processing units;volatile memory; and receiving a video frame of a higher-resolution format, the video frame of the higher-resolution format including sample values of first, second, and third component planes;', 'assigning the sample values of the first component plane of the video frame of the higher-resolution format to a first component plane of a first video frame of a lower-resolution format, the lower-resolution format having a lower resolution than the higher-resolution format;', 'assigning at least some of the sample values of the second and third component planes of the video frame of the higher-resolution format to second and third component planes of the first video frame of the lower-resolution format; and, 'non-volatile memory and/or storage, the non-volatile memory and/or storage having stored therein computer-executable instructions for causing the computing device, when programmed thereby, to perform operations comprisingassigning at least some of the sample values of the second and third component planes of the video frame of the ...
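
A simplified packing that keeps the sample budget consistent (3*W*H samples in, two frames of 1.5*W*H samples out): frame 1 is an ordinary 4:2:0 subsampling of the input and frame 2 carries the chroma samples that subsampling would discard. This is only an illustration of the idea; the exact geometric arrangement described above is not reproduced here:

    import numpy as np

    def pack_444_into_two_420(y, u, v):
        """Losslessly pack one YUV 4:4:4 frame into two YUV 4:2:0 frames."""
        frame1 = {"Y": y, "U": u[0::2, 0::2], "V": v[0::2, 0::2]}
        frame2 = {
            "Y": np.vstack([u[1::2, :], v[1::2, :]]),    # odd chroma rows
            "U": u[0::2, 1::2],                          # even rows, odd columns
            "V": v[0::2, 1::2],
        }
        return frame1, frame2

    h, w = 4, 4
    y, u, v = (np.arange(h * w).reshape(h, w) + k for k in (0, 100, 200))
    f1, f2 = pack_444_into_two_420(y, u, v)
    print(f2["Y"].shape, f1["U"].shape)   # (4, 4) (2, 2)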

More details
22-04-2021 publication date

ORGANIC LUMINESCENT MATERIAL HAVING AN ANCILLARY LIGAND WITH A PARTIALLY FLUORINE-SUBSTITUTED SUBSTITUENT

Number: US20210115069A1
Assignee:

Provided is an organic light-emitting material having an ancillary ligand with partially fluorinated substituents. The organic light-emitting material is a metal complex having a diketone ancillary ligand with partially fluorinated substituents and may be used as a light-emitting material in an organic electroluminescent device. These new types of metal complex can fine-tune the emission wavelength more effectively, reduce voltage, improve efficiency, prolong lifetimes, and provide better device performance. Further provided are an organic electroluminescent device and a compound formulation.

More details
09-05-2017 publication date

Video decoding implementations for a graphics processing unit

Number: US0009648325B2

Video decoding innovations for multithreading implementations and graphics processor unit (“GPU”) implementations are described. For example, for multithreaded decoding, a decoder uses innovations in the areas of layered data structures, picture extent discovery, a picture command queue, and/or task scheduling for multithreading. Or, for a GPU implementation, a decoder uses innovations in the areas of inverse transforms, inverse quantization, fractional interpolation, intra prediction using waves, loop filtering using waves, memory usage and/or performance-adaptive loop filtering. Innovations are also described in the areas of error handling and recovery, determination of neighbor availability for operations such as context modeling and intra prediction, CABAC decoding, computation of collocated information for direct mode macroblocks in B slices, reduction of memory consumption, implementation of trick play modes, and picture dropping for quality adjustment.

More details
20-08-2015 publication date

MULTI-THREADED IMPLEMENTATIONS OF DEBLOCK FILTERING

Number: US20150237381A1
Assignee: Microsoft Technology Licensing, LLC

Multi-threaded implementations of deblock filtering improve encoding and/or decoding efficiency. For example, a video encoder or decoder partitions a video picture into multiple segments. The encoder/decoder selects between multiple different patterns for splitting operations of deblock filtering into multiple passes. The encoder/decoder organizes the deblock filtering as multiple tasks, where a given task includes the operations of one of the passes for one of the segments. The encoder/decoder then performs the tasks with multiple threads. The performance of the tasks is constrained by task dependencies which, in general, are based at least in part on which lines of the picture are in the respective segments and which deblock filtering operations are in the respective passes. The task dependencies can include a cross-pass, cross-segment dependency between a given pass of a given segment and an adjacent pass of an adjacent segment. 120-. (canceled)21. A computer system comprising:memory configured to store a video picture; and partition the video picture into multiple segments for deblock filtering whose operations are split into multiple passes, each of the multiple passes including different operations, among the operations of the deblock filtering, that are to be performed on a per pass basis across blocks and/or sub-blocks of a given segment of the multiple segments;', 'organize the deblock filtering for the video picture as multiple tasks, wherein a given task of the multiple tasks includes the operations of one of the multiple passes for one of the multiple segments; and', 'perform the multiple tasks with multiple threads, wherein the performance of the multiple tasks is constrained by task dependencies that include a cross-pass, cross-segment dependency between a given pass for the given segment and an adjacent pass for an adjacent segment of the multiple segments, the adjacent pass including different operations than the given pass, and wherein the ...
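
A rough model of the task graph, assuming one task per (segment, pass) and the simplest form of the cross-pass, cross-segment dependency; the exact dependency rule in a real encoder depends on which lines each segment covers:

    def deblock_task_dependencies(num_segments, num_passes):
        """Map each (segment, pass) task to the tasks it must wait for:
        the previous pass of the same segment, plus the adjacent pass of the
        previous (adjacent) segment."""
        deps = {}
        for s in range(num_segments):
            for p in range(num_passes):
                d = []
                if p > 0:
                    d.append((s, p - 1))
                if s > 0:
                    d.append((s - 1, min(p + 1, num_passes - 1)))
                deps[(s, p)] = d
        return deps

    # Segment 1 cannot start pass 0 until segment 0 has finished pass 1.
    print(deblock_task_dependencies(2, 2)[(1, 0)])   # [(0, 1)]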

More details
05-05-2016 publication date

SINGLE-PASS/SINGLE COPY NETWORK ABSTRACTION LAYER UNIT PARSER

Number: US20160127518A1
Author: Ziyad Ibrahim, Yongjun Wu
Assignee:

Technologies for a single-pass/single copy network abstraction layer unit (“NALU”) parser. Such a NALU parser typically reuses source and/or destination buffers, optionally changes endianess of NALU data, optionally processes emulation prevention codes, and optionally processes parameters in slice NALUs, all as part of a single pass/single copy process. The disclosed NALU parser technologies are further suitable for hardware implementation, software implementation, or any combination of the two. 1. A method performed on a computing device that includes at least one processor and memory , the method comprising:sequentially reading, by the computing device, one data unit at a time from at least one source buffer into a code field that is separate from the at least one source buffer and a destination buffer, where each one data unit is read only one time during the sequentially reading;finding, in the code field during the sequentially reading, a first code in the at least one source buffer and a second code in the at least one source buffer, where the first code and the second code are each more than one data unit in size; andsequentially copying data units from the at least one source buffer into the destination buffer, where the copying begins at a data unit corresponding to the first code in the at least one source buffer and ends with a data unit corresponding to the second code in the at least one source buffer, where, upon completion of the sequentially copying, the destination buffer contains one complete network abstraction layer unit (“NALU”), and where, upon completion of the sequentially copying, each copied data unit was copied only one time from the at least one source buffer to the destination buffer.2. The method of where the sequentially copying is only performed in response to previously finding a NALU start code.3. The method of where the sequentially copying skips copying each added value from the at least one source buffer of any found codes that ...
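
A single-pass sketch of the copy step, removing emulation prevention bytes (the 0x03 inserted after each 0x00 0x00 pair) while the payload is copied from the source buffer to the destination buffer; buffer management and endianness handling are omitted:

    def copy_nalu_payload(src, start, end):
        """Copy src[start:end] into a destination buffer, dropping emulation
        prevention bytes in the same pass that performs the copy."""
        dst = bytearray()
        zeros = 0
        for b in src[start:end]:
            if zeros >= 2 and b == 0x03:
                zeros = 0            # skip the emulation prevention byte
                continue
            dst.append(b)
            zeros = zeros + 1 if b == 0 else 0
        return bytes(dst)

    print(copy_nalu_payload(b"\x67\x00\x00\x03\x01\xe5", 0, 6).hex())   # 67000001e5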

More details
21-06-2012 publication date

AUTO-REGRESSIVE EDGE-DIRECTED INTERPOLATION WITH BACKWARD PROJECTION CONSTRAINT

Number: US20120155550A1
Assignee: Microsoft Corporation

Techniques and tools for interpolation of image/video content are described. For example, a tool such as a display processing module in a computing device receives pixel values of a low-resolution picture and determines an interpolated pixel value between a set of the pixel values from the low-resolution picture. The tool uses auto-regressive edge-directed interpolation that incorporates a backward projection constraint (AR-EDIBC). As part of the AR-EDIBC, the tool can compute auto-regressive (AR) coefficients then apply the AR coefficients to the set of pixel values to determine the interpolated pixel value. For the backward projection constraint, the tool accounts for effects of projecting interpolated pixel values back to the pixel values of the low-resolution picture. The tool stores the interpolated pixel values and pixel values from the low-resolution picture as part of a high-resolution picture. The tool can adaptively use AR-EDIBC depending on content and other factors. 1. A computer-implemented method of pixel value interpolation , the method comprising:receiving pixel values of a picture;determining an interpolated pixel value between a set of the pixel values of the picture, wherein the determination uses auto-regressive edge-directed interpolation that incorporates a backward projection constraint; andstoring the interpolated pixel value.2. The computer-implemented method of wherein the picture is a low-resolution reconstructed picture from a video decoder claim 1 , wherein the storing the interpolated pixel value stores the interpolated pixel value as part of a high-resolution picture in memory claim 1 , and wherein the high-resolution picture also includes the pixel values of the low-resolution reconstructed picture.3. The computer-implemented method of wherein the auto-regressive edge-directed interpolation that incorporates the backward projection constraint includes:computing auto-regressive coefficients from those of the pixel values of the picture ...

More details
25-12-2014 publication date

Picture Referencing Control for Video Decoding Using a Graphics Processor

Number: US20140376641A1
Assignee:

A video decoder obtains a first set of picture buffering parameters associated with a current picture of an encoded video bitstream. The first set of picture buffering parameters identifies a set of one or more reference pictures for use in decoding the current picture by a graphics processor. The video decoder revises the first set of picture buffering parameters into a second (different) set of picture buffering parameters for use in decoding the current picture by the graphics processor. The second set of picture buffering parameters is transferred to the graphics processor for decoding the current picture. 1. A method comprising:obtaining a first set of picture buffering parameters associated with a current picture of an encoded video bitstream, the first set of picture buffering parameters identifying a set of one or more reference pictures for use in decoding the current picture by a graphics processor;revising the first set of picture buffering parameters into a second set of picture buffering parameters for use in decoding the current picture by the graphics processor, the second set of picture buffering parameters identifying a different set of one or more reference pictures than the first set of picture buffering parameters; andtransferring the second set of picture buffering parameters to the graphics processor for decoding the current picture.2. The method of wherein obtaining the first set of picture buffering parameters comprises:receiving the encoded video bitstream at a decoder; andextracting the first set of picture buffering parameters from the encoded video bitstream.3. The method of wherein revising the first set of picture buffering parameters comprises:replacing a first set of picture buffering parameters indicated to be used for decoding the entire current picture with a second set of picture buffering parameters indicated to be used for decoding the entire current picture, the second set of picture buffering parameters referencing at least ...

Publication date: 19-01-2023

SUPPLEMENTAL ENHANCEMENT INFORMATION INCLUDING CONFIDENCE LEVEL AND MIXED CONTENT INFORMATION

Number: US20230017315A1
Assignee: Microsoft Technology Licensing, LLC

This application relates to video encoding and decoding, and specifically to tools and techniques for using and providing supplemental enhancement information in bitstreams. Among other things, the detailed description presents innovations for bitstreams having supplemental enhancement information (SEI). In particular embodiments, the SEI message includes picture source data (e.g., data indicating whether the associated picture is a progressive scan picture or an interlaced scan picture and/or data indicating whether the associated picture is a duplicate picture). The SEI message can also express a confidence level of the encoder's relative confidence in the accuracy of this picture source data. A decoder can use the confidence level indication to determine whether the decoder should separately identify the picture as progressive or interlaced and/or a duplicate picture or honor the picture source scanning information in the SEI as it is.
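
As a minimal sketch of how a decoder might act on such an SEI message, the snippet below honors the encoder's picture-source hint only when the signaled confidence is high and otherwise falls back to the decoder's own detection. The field names and the 0-3 confidence scale are assumptions for illustration, not the actual syntax.

```python
from dataclasses import dataclass

@dataclass
class PicSourceSEI:
    source_scan_type: str   # "progressive", "interlaced" or "unknown"
    duplicate_flag: bool    # encoder believes this picture repeats the previous one
    confidence: int         # 0 = no confidence .. 3 = authoritative (invented scale)

def resolve_scan_type(sei: PicSourceSEI, detect_fn) -> str:
    """Honor the SEI hint when the encoder is confident, otherwise fall back
    to the decoder's own progressive/interlaced detection."""
    if sei.confidence >= 2 and sei.source_scan_type != "unknown":
        return sei.source_scan_type
    return detect_fn()

# Example: low confidence, so the decoder's own analysis wins.
sei = PicSourceSEI(source_scan_type="interlaced", duplicate_flag=False, confidence=1)
print(resolve_scan_type(sei, detect_fn=lambda: "progressive"))  # -> "progressive"
```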

Publication date: 09-06-2022

VIRTUAL PRODUCT PLACEMENT

Number: US20220180898A1
Assignee:

Techniques are described for automating virtual placements in video content. 1. A method , comprising:analyzing frames of video content to identify attributes, the attributes including, for a first clip of the video content, a categorization of a first object represented in the first clip;identifying a first candidate placement using the attributes, the first candidate placement corresponding to a surface of the first object represented in the first clip;generating a plurality of replacement clips using the first clip and a plurality of secondary content items, each replacement clip including a representation of a corresponding secondary content item of the plurality of secondary content items positioned at the first candidate placement;processing a request to initiate playback of the video content, the request being received from a client device;identifying an account profile associated with the client device;identifying metadata marking a first candidate insertion point within the video content, the first candidate insertion point corresponding to a start time of the first clip;selecting a first replacement clip of the plurality of replacement clips based, at least in part, on the account profile; andproviding a manifest to the client device, the manifest comprising first manifest data including references to fragments corresponding to a first subset of a plurality of segments of the video content and second manifest data including references to a set of replacement fragments corresponding to the selected first replacement clip, the set of replacement fragments corresponding to a second subset of the plurality of segments of the video content.2. The method of claim 1 , further comprising:generating a plurality of scores based, at least in part on the attributes, each score corresponding to a candidate placement of a plurality of candidate placements including the first candidate placement, each candidate placement corresponding to a surface of an object ...

Publication date: 08-10-2009

ADAPTIVE ERROR DETECTION FOR MPEG-2 ERROR CONCEALMENT

Number: US20090252233A1
Assignee: Microsoft Corporation

A decoder which can detect errors in MPEG-2 coefficient blocks can identify syntactically-correct blocks which have out-of-bounds coefficients. The decoder computes coefficient bounds based on quantization scalers and quantization matrices and compares these to coefficient blocks during decoding; if a block has out-of-bounds coefficients, concealment is performed on the block. In a decoder implemented all in software, coefficient bounds checking is performed on iDCT coefficients against upper and lower bounds in a spatial domain. In a decoder which performs iDCT in hardware, DCT coefficients are compared to an upper energy bound.
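
A hedged sketch of the bounds-checking idea: dequantize each coefficient with a simplified MPEG-2-style non-intra formula and flag the block for concealment if any value leaves the legal [-2048, 2047] range. The exact bound derivation in the patent may differ; the helper names are invented.

```python
def dequant_nonintra(level: int, w: int, quant_scale: int) -> int:
    """Simplified MPEG-2-style inverse quantization of one non-intra
    coefficient (mismatch control and intra DC handling omitted)."""
    if level == 0:
        return 0
    sign = 1 if level > 0 else -1
    return sign * (((2 * abs(level) + 1) * w * quant_scale) // 32)

def block_out_of_bounds(levels, quant_matrix, quant_scale,
                        lo=-2048, hi=2047) -> bool:
    """Return True if any dequantized coefficient leaves the legal
    [-2048, 2047] range; a decoder can then treat the block as corrupted
    and hand it to error concealment."""
    for level, w in zip(levels, quant_matrix):
        if not lo <= dequant_nonintra(level, w, quant_scale) <= hi:
            return True
    return False

# A deliberately out-of-range level triggers concealment.
print(block_out_of_bounds([3000] + [0] * 63, [16] * 64, quant_scale=31))  # True
```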

Publication date: 02-05-2013

IMPLEMENTING CHANNEL START AND FILE SEEK FOR DECODER

Number: US20130108248A1
Assignee: MICROSOFT CORPORATION

A video bit stream with pictures comprising inter-coded content can be decoded upon receiving a channel start or file seek instruction. Pictures for beginning decoding and display of the bit stream can be selected based at least in part on one or more tuning parameters that set a preference between a latency of beginning to display video and possible defects in the displayed video. In some embodiments, to implement decoding upon a channel start or file seek, one or more types of data are generated for one or more pictures. For example, picture order counts are generated for pictures after a channel start or file seek operation. As another example, a decoder generates a frame number value that triggers re-initialization of a reference picture buffer before decoding after a channel start or file seek operation. 19.-. (canceled)10. One or more computer-readable storage media containing instructions which , when executed by a processor , cause the processor to perform a method of video playback upon a channel start or file seek , the method comprising:receiving an instruction to perform a channel start or file seek for a bit stream of encoded video data;based at least in part on the instruction to perform the channel start or file seek, generating a frame identifier value that results in a gap between frame identifier values; generating substitute data for one or more reference pictures; and', 'marking the one or more reference pictures as non-existent for purposes of reference picture management; and, 'upon detection of the gap between frame identifier valuesdecoding plural pictures after the channel start or file seek using at least part of the bit stream of encoded video data.11. The one or more computer-readable storage media of claim 10 , wherein the frame identifier value is an invalid frame number claim 10 , and wherein the gap in identifier values is a gap in frame numbers claim 10 , and wherein the substitute data comprise sample values for the one or more ...
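
A minimal sketch, assuming an H.264-style frame_num, of the claimed behavior: a forced gap in frame identifiers makes the decoder insert gray substitute reference pictures and mark them non-existent, so that decoding can proceed after a channel start or file seek. Data structures and names are invented.

```python
def handle_frame_num_gap(dpb, prev_frame_num, curr_frame_num, max_frame_num,
                         width, height):
    """Invented, simplified gap handling: when decoding resumes after a
    channel start or file seek, the gap in frame_num makes the decoder
    synthesize gray substitute reference frames and mark them 'non-existent',
    so later pictures that reference them can still be decoded."""
    frame_num = (prev_frame_num + 1) % max_frame_num
    while frame_num != curr_frame_num:
        dpb.append({
            "frame_num": frame_num,
            "non_existent": True,                        # never displayed
            "samples": bytes([128]) * (width * height),  # mid-gray substitute
        })
        frame_num = (frame_num + 1) % max_frame_num
    return dpb

dpb = handle_frame_num_gap([], prev_frame_num=0, curr_frame_num=4,
                           max_frame_num=16, width=16, height=16)
print([p["frame_num"] for p in dpb])   # [1, 2, 3]
```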

Publication date: 10-02-2005

Method of pilot-tone signal transmission on an optical fiber and a system thereof

Number: US20050031342A1
Author: Yongjun Wu, Xingyue Sha
Assignee:

The invention discloses a pilot-tone signal transmission method and a corresponding system. In the method, the transmitting end converts physical characteristics of an original pilot-tone signal and then transmits the converted pilot-tone signal over an optical fiber; the receiving end inverse-converts the physical characteristics of the pilot-tone signal extracted from the optical fiber to recover the original pilot-tone signal. The system includes a source device, a target device, an electro-optical converter, optical fibers, an opto-electronic converter, a signal-extracting device, a signal-converting device and a signal-inverse-converting device. With this scheme, the invention overcomes the carrier-to-noise ratio limitation, provides better signal-to-noise ratio performance, and can effectively recover the pilot-tone signal to its original form even under poor signal-to-noise conditions.

Publication date: 27-08-2019

Combining encoded video streams

Number: US0010397518B1
Assignee: Amazon Technologies, Inc., AMAZON TECH INC

Techniques are described by which multiple, independently encoded video streams may be combined into a single decodable video stream. These techniques take advantage of existing features of commonly used video codecs that support the independent encoding of different regions of an image frame (e.g., H.264 slices or HEVC tiles). Instead of including different parts of the same image, each region corresponds to the encoded image data of the frames of one of the independent video streams.

Publication date: 26-01-2010

Overlapped block motion compensation for variable size blocks in the context of MCTF scalable video coders

Number: US0007653133B2

A method, computer program product, and computer system for processing video frames. A current frame is divided into M blocks that include at least two differently sized blocks. M is at least 9. Each block in the current frame is classified as being a motion block or an I-BLOCK. Overlapped block motion compensation (OBMC) is performed on each block of the M blocks according to a predetermined scan order. The block on which OBMC is being performed is denoted as a self block. The OBMC is performed on the self block with respect to its neighbor blocks. The neighbor blocks consist of nearest neighbor blocks of the self block. Performing OBMC on the self block includes generating a weighting window for the self block and for each of its neighbor blocks.
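
The sketch below illustrates the core OBMC blend for a single fixed-size block: predictions formed with the self motion vector and each neighbor motion vector are combined with weighting windows that sum to one at every pixel. It assumes integer motion vectors and ignores variable block sizes, picture borders and the I-BLOCK handling described above; the window shape is invented.

```python
import numpy as np

def obmc_block(ref, top_left, mv_self, mv_neighbors, size=8):
    """Toy overlapped block motion compensation for one 'self' block:
    predictions made with the self MV and each neighbor MV are blended
    with weighting windows that sum to one at every pixel."""
    y0, x0 = top_left

    def predict(mv):
        dy, dx = mv
        return ref[y0 + dy:y0 + dy + size, x0 + dx:x0 + dx + size].astype(float)

    if not mv_neighbors:
        return predict(mv_self)

    # Invented raised window for the self block; the remaining weight is
    # shared equally by the neighbor predictions so all weights sum to one.
    yy, xx = np.meshgrid(np.hanning(size), np.hanning(size), indexing="ij")
    w_self = 0.5 + 0.5 * (yy * xx)
    w_nbr = (1.0 - w_self) / len(mv_neighbors)

    pred = w_self * predict(mv_self)
    for mv in mv_neighbors:
        pred += w_nbr * predict(mv)
    return pred

ref = np.arange(32 * 32, dtype=float).reshape(32, 32)
block = obmc_block(ref, top_left=(8, 8), mv_self=(0, 1),
                   mv_neighbors=[(0, 0), (1, 0), (0, 2), (-1, 0)])
print(block.shape)   # (8, 8)
```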

Publication date: 11-09-2018

Custom data indicating nominal range of samples of media content

Number: US0010075748B2

A media processing tool adds custom data to an elementary media bitstream or media container. The custom data indicates nominal range of samples of media content, but the meaning of the custom data is not defined in the codec format or media container format. For example, the custom data indicates the nominal range is full range or limited range. For playback, a media processing tool parses the custom data and determines an indication of media content type. A rendering engine performs color conversion operations whose logic changes based at least in part on the media content type. In this way, a codec format or media container format can in effect be extended to support full nominal range media content as well as limited nominal range media content, and hence preserve full or correct color fidelity, while maintaining backward compatibility and conformance with the codec format or media container format.
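
A minimal sketch of why the nominal-range indication matters during rendering: the same 8-bit luma sample is expanded differently depending on whether the custom data marks the content as limited (studio, 16-235) or full (0-255) range. The flag values used here are assumptions.

```python
def expand_to_full_range(y_sample: int, nominal_range: str) -> int:
    """Map an 8-bit luma sample to full range [0, 255].  Which formula to use
    is decided by the custom range indication carried alongside the bitstream
    ('limited' = studio range 16..235, 'full' = no scaling needed; the flag
    values are assumed for illustration)."""
    if nominal_range == "full":
        return y_sample
    # Limited (16..235) -> full (0..255), clipped against out-of-range input.
    value = round((y_sample - 16) * 255 / 219)
    return max(0, min(255, value))

print(expand_to_full_range(16, "limited"), expand_to_full_range(235, "limited"))  # 0 255
print(expand_to_full_range(128, "full"))                                          # 128
```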

Publication date: 31-10-2017

Metadata assisted video decoding

Number: US0009807409B2

A video decoder is disclosed that uses metadata in order to make optimization decisions. In one embodiment, metadata is used to choose which of multiple available decoder engines should receive a video sequence. In another embodiment, the optimization decisions can be based on length and location metadata information associated with a video sequence. Using such metadata information, a decoder engine can skip start-code scanning to make the decoding process more efficient. Also based on the choice of decoder engine, it can decide whether emulation prevention byte removal shall happen together with start code scanning or not.
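
The sketch below contrasts the two paths mentioned above for an Annex-B style stream: scanning for 0x000001 start codes versus slicing NAL units directly from (offset, length) metadata supplied out of band. The metadata layout is an assumption for illustration.

```python
def nal_units_by_scanning(stream: bytes):
    """Find NAL units the slow way: scan the stream for 0x000001 start codes."""
    positions = []
    i = 0
    while True:
        i = stream.find(b"\x00\x00\x01", i)
        if i < 0:
            break
        positions.append(i + 3)
        i += 3
    return [stream[s: (positions[k + 1] - 3) if k + 1 < len(positions) else len(stream)]
            for k, s in enumerate(positions)]

def nal_units_by_metadata(stream: bytes, offsets_and_lengths):
    """With (offset, length) metadata supplied out of band (assumed layout),
    the decoder can slice NAL units directly and skip start-code scanning."""
    return [stream[off:off + length] for off, length in offsets_and_lengths]

data = b"\x00\x00\x01\x67AA\x00\x00\x01\x68BB"
print(nal_units_by_scanning(data))               # [b'gAA', b'hBB']
print(nal_units_by_metadata(data, [(3, 3), (9, 3)]))  # same result, no scanning
```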

Publication date: 26-04-2012

METHOD AND APPARATUS FOR SCALABLE MOTION VECTOR CODING

Number: US20120099652A1
Author: WOODS John W., Wu Yongjun
Assignee: SAMSUNG ELECTRONICS CO., LTD.

A method and apparatus for scalable coding of a motion vector generated during motion estimation, in which a generated motion vector field is separated into a base layer and an enhancement layer according to pixel accuracies to obtain a layered structure for a motion vector. In addition, the motion vector field has a layered structure including a base layer composed of motion vectors of blocks larger than or equal to a predetermined size and at least one enhancement layer composed of motion vectors of blocks smaller than a predetermined size. 1. A scalable motion vector coding method comprising:(a) dividing a current frame into a plurality of blocks and performing motion estimation to determine a motion vector for each of the divided blocks;(b) forming a base layer including motion vectors of blocks larger than or equal to a predetermined size and at least one enhancement layer including motion vectors of blocks smaller than the predetermined size, using the motion vectors of the divided blocks; and(c) coding the base layer and the enhancement layer.2. The scalable motion vector coding method of claim 1 , wherein in (a) claim 1 , a motion vector for each of the divided blocks is determined using hierarchical variable size block matching.3. The scalable motion vector coding method of claim 1 , wherein in (b) claim 1 , the base layer is formed by assigning a representative motion vector selected from motion vectors of adjacent small blocks smaller than the predetermined size among the divided blocks to a larger block formed by merging the small blocks and the enhancement layer is formed of the motion vectors of the merged small blocks.4. The scalable motion vector coding method of claim 3 , wherein the representative motion vector is the first motion vector having a same type as the motion vector of the larger block among the motion vectors of the small blocks scanned according to a predetermined scan order.5. The scalable motion vector coding method of claim 4 , ...
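
A rough sketch of the layering idea under simplifying assumptions: motion vectors of blocks at or above a threshold size go to the base layer, smaller-block vectors go to the enhancement layer, and the first small-block vector in scan order stands in as the representative vector for the merged parent block. This is not the exact patented scheme; the block representation is invented.

```python
def layer_motion_field(blocks, base_min_size=16):
    """Simplified layering (invented representation): split motion vectors
    into a base layer (large blocks) and an enhancement layer (small blocks).
    Each block is (x, y, size, mv); small blocks sharing the same aligned
    parent are merged, and the first small-block MV in scan order becomes
    the parent's representative MV."""
    base, enhancement = {}, []
    for x, y, size, mv in sorted(blocks, key=lambda b: (b[1], b[0])):  # scan order
        if size >= base_min_size:
            base[(x, y)] = mv
        else:
            parent = (x - x % base_min_size, y - y % base_min_size)
            base.setdefault(parent, mv)      # representative MV for the merged block
            enhancement.append((x, y, size, mv))
    return base, enhancement

blocks = [(0, 0, 16, (1, 0)), (16, 0, 8, (2, 1)), (24, 0, 8, (2, 2)),
          (16, 8, 8, (3, 1)), (24, 8, 8, (3, 2))]
base, enh = layer_motion_field(blocks)
print(base)      # {(0, 0): (1, 0), (16, 0): (2, 1)}
print(len(enh))  # 4
```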

Publication date: 17-05-2012

BITSTREAM MANIPULATION AND VERIFICATION OF ENCODED DIGITAL MEDIA DATA

Number: US20120121025A1
Assignee: MICROSOFT CORPORATION

Disclosed herein are representative embodiments of methods, apparatus, and systems for manipulating bitstreams of digital media data compressed according to a compression standard. Also disclosed are representative embodiments of methods, apparatus, and systems for evaluating compliance of an encoded bitstream of digital media data with a compression standard. In one exemplary embodiment, a conforming bitstream of compressed digital media data is input. One or more of the parameters in the bitstream are selectively altered into parameters that do not conform to the video compression standard. The selective alteration can be performed such that parameters that would make the bitstream non-decodable if altered are bypassed and left unaltered. A non-conforming bitstream that includes the one or more selectively altered parameters is output. 1. A method , comprising:inputting a conforming bitstream of encoded digital media data, the conforming bitstream being arranged into a syntax that conforms to a video compression standard, the conforming bitstream further comprising parameters that conform to the video compression standard;selectively altering one or more of the parameters in the bitstream into parameters that do not conform to the video compression standard, the selective altering being performed such that parameters that would render the bitstream non-decodable if altered are bypassed and left unaltered; andoutputting a non-conforming bitstream of encoded digital media data, the non-conforming bitstream including the one or more selectively altered parameters.2. The method of claim 1 , further comprising parsing the conforming bitstream into one or more data structures claim 1 , at least one of the data structures corresponding to a header and associated parameter data from the conforming bitstream.3. The method of claim 1 , wherein the method further comprises receiving user input indicative of the parameters that are to be selectively altered claim 1 , and ...

Publication date: 07-06-2012

Fixing structure of a faucet and an operating method thereof

Number: US20120137427A1
Assignee: GLOBE UNION INDUSTRIAL CORP

A fixing structure of a faucet fixed on a support plate with an opening and contains the faucet including a housing having a mouth and a through aperture; the faucet also including an inlet pipe unit; a locking member being operated to move between an engaging position and a disengaging position along the through aperture; a positioning device including a fitting seat having a bottom face, a channel defined therein to receive the inlet pipe unit of the faucet, at least one slot disposed along an outer surface thereof to slide the locking member located at the engaging position, the slot including at least one tooth and at least one retaining recess such that the locking member passes through the tooth to be retained in the retaining recess and is limited by the tooth to move so that the mouth is fixed to the fitting seat.

Publication date: 21-06-2012

STEREO 3D VIDEO SUPPORT IN COMPUTING DEVICES

Number: US20120154526A1
Assignee: MICROSOFT CORPORATION

Methods are disclosed for supporting stereo 3D video in computing devices. A computing device can receive stereo 3D video data employing a YUV color space and chroma subsampling, and can generate anaglyph video data therefrom. The anaglyph video data can be generated by unpacking the stereo 3D video data to left and right views and combining the left and right views into a single view via matrix transformation. The combining uses transform matrices that correspond to a video pipeline configuration. The transform matrix coefficients can depend on characteristics of the video pipeline components. Modified transform matrix coefficients can be used in response to changes in the video pipeline configuration. Video encoded in stereo 3D video data can be selected to be displayed in stereo 3D, anaglyph or monoscopic form, depending on user input and/or characteristics of video pipeline components. 1. A method of displaying anaglyph video in place of stereo 3D video , the method comprising:receiving stereo 3D video data employing a YUV color space; unpacking the stereo 3D video data to a left view and a right view; and', 'combining the left view and the right view into a single view, the anaglyph video data comprising the single view; and, 'using a computing device, generating anaglyph video data from the stereo 3D video data, the generating comprisingdisplaying video encoded in the stereo 3D video data in anaglyph form.2. The method of claim 1 , wherein the combining uses anaglyph transform matrices that correspond to a configuration of a video pipeline.3. The method of claim 2 , wherein the anaglyph transform matrices include anaglyph transform matrix coefficients that depend on characteristics of components within the video pipeline.4. The method of claim 3 , wherein the video encoded in the stereo 3D video data is displayed on a display and at least one of the anaglyph transform matrix coefficients depend on one or more characteristics of the following: the display claim ...
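
A small sketch of the left/right combination step using fixed red-cyan matrices; in the described system the matrix coefficients would instead be chosen to match the actual display and video pipeline components, so the values below are illustrative only.

```python
import numpy as np

# Classic red/cyan anaglyph matrices (illustrative only; the described system
# picks coefficients that depend on the display and pipeline components).
LEFT_M = np.array([[1.0, 0.0, 0.0],    # left eye contributes the red channel
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
RIGHT_M = np.array([[0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],   # right eye contributes green ...
                    [0.0, 0.0, 1.0]])  # ... and blue

def anaglyph(left_rgb, right_rgb):
    """Combine two HxWx3 RGB views into a single anaglyph view using
    per-eye transform matrices."""
    combined = left_rgb @ LEFT_M.T + right_rgb @ RIGHT_M.T
    return np.clip(combined, 0.0, 1.0)

left = np.random.rand(4, 4, 3)
right = np.random.rand(4, 4, 3)
print(anaglyph(left, right).shape)   # (4, 4, 3)
```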

Publication date: 23-08-2012

LOCAL PICTURE IDENTIFIER AND COMPUTATION OF CO-LOCATED INFORMATION

Number: US20120213286A1
Assignee: MICROSOFT CORPORATION

Video decoding innovations for using local picture identifiers and computing co-located information are described. In one aspect, a decoder identifies reference pictures in a reference picture list of a temporal direct prediction mode macroblock that match reference pictures used by a co-located macroblock using local picture identifiers. In another aspect, a decoder determines whether reference pictures used by blocks are the same by comparing local picture identifiers during calculation of boundary strength. In yet another aspect, a decoder determines a picture type of a picture and based on the picture type selectively skips or simplifies computation of co-located information for use in reconstructing direct prediction mode macroblocks outside the picture. 19-. (canceled)10. A computer-implemented method for transforming encoded video information using a video decoder , the method comprising:receiving encoded video information in a bitstream; 'calculating boundary strength values for plural blocks, wherein the calculating comprises determining whether reference pictures used by the plural blocks are the same by comparing local picture identifiers of the reference pictures, wherein the local picture identifiers are assigned to picture structures when allocated, and wherein the decoder reuses the local picture identifiers during the decoding based on availability of the local picture identifiers; and', 'performing loop filtering during decoding the encoded video information, comprisingoutputting the filtered macroblock.11. The method of wherein the local picture identifiers are 8-bit local picture identifiers claim 10 , and wherein the decoder sets the local picture identifiers independent of picture order count.12. The method of wherein the local picture identifiers are 5-bit local picture identifiers.13. The method of wherein the local picture identifiers are greater than or equal to 5-bits claim 10 , and less than or equal to 32-bits claim 10 , and wherein the ...
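
A simplified sketch of how comparing small local picture identifiers can drive the boundary-strength decision during deblocking; the thresholds and return values are reduced from the real derivation, and the block representation is invented.

```python
def boundary_strength(block_p, block_q, intra_p=False, intra_q=False):
    """Simplified deblocking boundary strength between two neighboring
    blocks (not the full H.264 derivation).  Each block carries a local
    picture identifier for its reference picture ('ref_id') and a motion
    vector ('mv'); comparing the small integer IDs avoids touching the
    full picture structures."""
    if intra_p or intra_q:
        return 4                      # strongest filtering at intra edges
    if block_p["ref_id"] != block_q["ref_id"]:
        return 1                      # different reference pictures
    dmv_y = abs(block_p["mv"][0] - block_q["mv"][0])
    dmv_x = abs(block_p["mv"][1] - block_q["mv"][1])
    return 1 if max(dmv_y, dmv_x) >= 4 else 0   # >= one full sample apart

p = {"ref_id": 7, "mv": (0, 4)}
q = {"ref_id": 7, "mv": (0, 9)}
print(boundary_strength(p, q))   # 1
```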

Publication date: 28-02-2013

MEMORY MANAGEMENT FOR VIDEO DECODING

Number: US20130051478A1
Author: Sadhwani Shyam, Wu Yongjun
Assignee: MICROSOFT CORPORATION

Techniques and tools described herein help manage memory efficiently during video decoding, especially when multiple video clips are concurrently decoded. For example, with clip-adaptive memory usage, a decoder determines first memory usage settings expected to be sufficient for decoding of a video clip. The decoder also determines second memory usage settings known to be sufficient for decoding of the clip. During decoding, memory usage is initially set according to the first settings. Memory usage is adaptively increased during decoding, subject to theoretical limits in the second settings. With adaptive early release of side information, the decoder can release side information memory for a picture earlier than the decoder releases image plane memory for the picture. The decoder can also adapt memory usage for decoded transform coefficients depending on whether the coefficients are for intra-coded blocks or inter-coded blocks, and also exploit the relative sparseness of non-zero coefficient values. 1. In a computing system that implements a video decoder , a method comprising:receiving at least part of a bitstream for a video clip;determining first memory usage settings for decoding of the video clip;determining second memory usage settings different than the first memory usage settings, the second memory usage settings indicating one or more theoretical limits on memory usage according to a standard or format specification for decoding of the video clip; andduring the decoding of the video clip, adapting memory usage based at least in part on the first memory usage settings and the second memory usage settings, wherein the memory usage is initially set according to the first memory usage settings, and wherein the memory usage is increased during the decoding subject to the one or more theoretical limits in the second memory usage settings.2. The method of wherein the first memory usage settings are expected to be sufficient for the decoding of the video clip ...

Publication date: 09-05-2013

SIGNALING OF STATE INFORMATION FOR A DECODED PICTURE BUFFER AND REFERENCE PICTURE LISTS

Number: US20130114741A1
Assignee: MICROSOFT CORPORATION

Innovations for signaling state of a decoded picture buffer (“DPB”) and reference picture lists (“RPLs”). In example implementations, rather than rely on internal state of a decoder to manage and update DPB and RPLs, state information about the DPB and RPLs is explicitly signaled. This permits a decoder to determine which pictures are expected to be available for reference from the signaled state information. For example, an encoder determines state information that identifies which pictures are available for use as reference pictures (optionally considering feedback information from a decoder about which pictures are available). The encoder sets syntax elements that represent the state information. In doing so, the encoder sets identifying information for a long-term reference picture (“LTRP”), where the identifying information is a value of picture order count least significant bits for the LTRP. The encoder then outputs the syntax elements as part of a bitstream. 1. A computing system that implements a video encoder , wherein the computing system is adapted to perform a method comprising:determining state information that identifies which pictures are available for use as reference pictures;setting syntax elements that represent the state information, including setting identifying information for a long-term reference picture (“LTRP”), wherein the identifying information for the LTRP is a value of picture order count least significant bits (“POC LSBs”) for the LTRP; andoutputting the syntax elements as part of a bitstream.2. The computing system of wherein the syntax elements that represent the state information are signaled in the bitstream for a current picture.3. The computing system of wherein the method further comprises:determining whether to include status information about LTRPs in the bitstream for pictures of a sequence; andoutputting, as part of a sequence parameter set, a flag that indicates whether the status information about LTRPs is present in the ...
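
A minimal sketch of the identifying information mentioned above: a long-term reference picture is identified by the least significant bits of its picture order count, and the encoder assembles explicit state about which pictures should be available. The parameter and field names are assumptions, not the actual syntax.

```python
def poc_lsb(poc: int, log2_max_poc_lsb: int = 8) -> int:
    """Identify a long-term reference picture by the least significant bits
    of its picture order count (parameter name assumed for illustration)."""
    return poc % (1 << log2_max_poc_lsb)

def rps_state(available_pocs, long_term_pocs, log2_max_poc_lsb=8):
    """Build the explicit 'which pictures are available' state that an
    encoder could write into the bitstream (simplified stand-in syntax)."""
    return {
        "short_term_pocs": sorted(set(available_pocs) - set(long_term_pocs)),
        "long_term_poc_lsbs": [poc_lsb(p, log2_max_poc_lsb) for p in long_term_pocs],
    }

print(rps_state(available_pocs=[300, 301, 304], long_term_pocs=[48]))
# {'short_term_pocs': [300, 301, 304], 'long_term_poc_lsbs': [48]}
```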

Publication date: 09-05-2013

CATEGORY-PREFIXED DATA BATCHING OF CODED MEDIA DATA IN MULTIPLE CATEGORIES

Number: US20130117270A1
Assignee: MICROSOFT CORPORATION

Innovations for category-prefixed data batching (“CPDB”) of entropy-coded data or other payload data for coded media data, as well as innovations for corresponding recovery of the entropy-coded data (or other payload data) formatted with CPDB. The CPDB can be used in conjunction with coding/decoding for video content, image content, audio content or another type of content. For example, after receiving coded media data in multiple categories from encoding units, a formatting tool formats payload data with CPDB, generating a batch prefix for a batch of the CPDB-formatted payload data. The batch prefix includes a category identifier and a data quantity indicator. The formatting tool outputs the CPDB-formatted payload data to a bitstream. At the decoder side, a formatting tool receives the CPDB-formatted payload data in a bitstream, recovers the payload data from the CPDB-formatted payload data, and outputs the payload data (e.g., to decoding units). 1. A computing device that implements a formatting tool to facilitate parallel processing of coded media data in multiple categories , wherein the computing device is adapted to perform a method comprising:processing payload data formatted with category-prefixed data batching (“CPDB”), wherein the payload data includes coded media data in multiple categories associated with parallel processing, and wherein a batch prefix for a batch of the CPDB-formatted payload data includes a category identifier (“CI”) and a data quantity indicator (“DQI”); andoutputting results of the processing.2. The computing device of wherein the CPDB-formatted payload data is organized as multiple separated mode batches and a mixed mode batch claim 1 , wherein each of the multiple separated mode batches includes data for a different payload data category among multiple payload data categories claim 1 , and wherein the mixed mode batch includes any remaining data from all of the multiple payload data categories.3. The computing device of wherein ...
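
A hedged sketch of category-prefixed data batching with invented field widths: each batch carries a one-byte category identifier and a four-byte data quantity indicator, so a receiver can route each category's payload to its own decoding worker.

```python
import struct

def write_batches(batches):
    """Serialize (category_id, payload) pairs as category-prefixed batches:
    1-byte category identifier + 4-byte length + payload (field widths are
    invented for illustration)."""
    out = bytearray()
    for category_id, payload in batches:
        out += struct.pack(">BI", category_id, len(payload)) + payload
    return bytes(out)

def read_batches(data):
    """Recover the per-category payloads so each category can be handed to
    its own decoding worker."""
    pos, recovered = 0, {}
    while pos < len(data):
        category_id, length = struct.unpack_from(">BI", data, pos)
        pos += 5
        recovered.setdefault(category_id, bytearray()).extend(data[pos:pos + length])
        pos += length
    return recovered

stream = write_batches([(0, b"coeffs..."), (1, b"mvs..."), (0, b"more-coeffs")])
print({k: bytes(v) for k, v in read_batches(stream).items()})
```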

Publication date: 18-07-2013

Fixing Structure of a Pull-Out Faucet

Number: US20130180601A1
Assignee:

A fixing structure of a pull-out faucet is mounted on a platform with a fixing hole and contains a pull-out faucet including a housing, a pull-out spray head, a mixing valve, and a pipe line set; the pipe line set including a plurality of fixedly static pipe lines and a movably dynamic pipe line; a positioning device including a seat and a clamping set; the seat being fixed under the platform by the clamping set and being fixed in the fixing hole to fit with the housing, the seat including a passage set for inserting the pipe line set; wherein the passage set has a first passage for inserting the static pipe lines and a second passage for inserting the dynamic pipe line, and the first passage is spaced apart from the second passage so that the dynamic pipe line is limited in the second passage to move smoothly. 1. A fixing structure of a pull-out faucet being mounted on a platform with a fixing hole and comprising:a pull-out faucet including a housing, a pull-out spray head, a mixing valve fixed in the housing, and a pipe line set; the pipe line set including a plurality of fixedly static pipe lines connected to the mixing valve and a movably dynamic pipe line connected between one of the plurality of static pipe lines and the pull-out spray head;a positioning device including a seat and a clamping set; the seat being fixed under the platform by the clamping set and being fixed in the fixing hole of the platform to fit with the housing, the seat including a passage set for inserting the pipe line set; wherein the passage set has a first passage for inserting the plurality of static pipe lines and a second passage for inserting the dynamic pipe line, and the first passage is spaced apart from the second passage so that the dynamic pipe line is limited in the second passage to move smoothly.2. The fixing structure of the pull-out faucet as claimed in claim 1 , wherein the second passage of the seat is defined by an inner space of a tube claim 1 , and the tube is ...

Publication date: 22-08-2013

Metadata assisted video decoding

Number: US20130215978A1
Assignee: Microsoft Corp

A video decoder is disclosed that uses metadata in order to make optimization decisions. In one embodiment, metadata is used to choose which of multiple available decoder engines should receive a video sequence. In another embodiment, the optimization decisions can be based on length and location metadata information associated with a video sequence. Using such metadata information, a decoder engine can skip start-code scanning to make the decoding process more efficient. Also based on the choice of decoder engine, it can decide whether emulation prevention byte removal shall happen together with start code scanning or not.

Publication date: 10-10-2013

JOINT VIDEO STABILIZATION AND ROLLING SHUTTER CORRECTION ON A GENERIC PLATFORM

Number: US20130265460A1
Assignee: MICROSOFT CORPORATION

In one embodiment, a video processing system may filter a video data set to correct skew and wobble using a central processing unit and a graphical processing unit . The video processing system may apply a rolling shutter effect correction filter to an initial version of a video data set. The video processing system may simultaneously apply a video stabilization filter to the initial version to produce a final version video data set. 1. A machine-implemented method , comprising:determining a filtering apportionment between a graphical processing unit and a central processing unit based on a prior filter performance;applying a rolling shutter effect correction filter to an initial version of a video data set; andapplying a video stabilization filter to the initial version to produce a final version of the video data set.2. The method of claim 1 , further comprising:executing a motion estimation on the initial version using the graphical processing unit to create a down sample set.3. The method of claim 1 , further comprising:processing a down sample set of the initial version using the central processor unit to create a motion vector set.4. The method of claim 1 , further comprising:warping an image of the initial version by applying a motion vector set using the graphical processing unit.5. The method of claim 4 , further comprising:adjusting a warping constant on the motion vector set based on a previous iteration.6. The method of claim 1 , further comprising:creating a preview proxy of the video data set using the rolling shutter effect correction filter and the video stabilization filter.7. The method of claim 1 , further comprising:caching a preview proxy set of the video data set.8. The method of claim 1 , further comprising:receiving a user selection of a preview proxy of a preview proxy set; andcreating the final version based on the user selection.9. The method of claim 1 , further comprising:setting a filter parameter for the rolling shutter effect ...

Publication date: 31-10-2013

FRACTIONAL INTERPOLATION FOR HARDWARE-ACCELERATED VIDEO DECODING

Number: US20130287114A1
Assignee:

Video decoding innovations for multithreading implementations and graphics processor unit (“GPU”) implementations are described. For example, for multithreaded decoding, a decoder uses innovations in the areas of layered data structures, picture extent discovery, a picture command queue, and/or task scheduling for multithreading. Or, for a GPU implementation, a decoder uses innovations in the areas of inverse transforms, inverse quantization, fractional interpolation, intra prediction using waves, loop filtering using waves, memory usage and/or performance-adaptive loop filtering. Innovations are also described in the areas of error handling and recovery, determination of neighbor availability for operations such as context modeling and intra prediction, CABAC decoding, computation of collocated information for direct mode macroblocks in B slices, reduction of memory consumption, implementation of trick play modes, and picture dropping for quality adjustment. 120.-. (canceled)21. A method of video decoding using a hardware-accelerated video decoder , the method comprising:classifying plural blocks according to plural motion vector types, wherein the plural motion vector types differ in terms of complexity of sample value interpolation; andwith a graphics processing unit, performing motion compensation operations for the plural blocks in plural passes corresponding to the plural motion vector types, respectively.22. The method of wherein each of the plural motion vector types is associated with a quantum of work for the motion vector type claim 21 , and wherein the quantum of work for each of the plural motion vector types is 8×8 block.23. The method of wherein plural motion vectors for the plural blocks are applied for 4×4 blocks in the motion compensation operations.24. The method of wherein the plural motion vector types are:an integer motion vector type that represents motion vectors with offsets at integer sample positions;a center offset motion vector type that ...

Publication date: 10-04-2014

REDUCING MEMORY CONSUMPTION DURING VIDEO DECODING

Number: US20140098887A1
Assignee: MICROSOFT CORPORATION

Video decoding innovations for multithreading implementations and graphics processor unit (“GPU”) implementations are described. For example, for multithreaded decoding, a decoder uses innovations in the areas of layered data structures, picture extent discovery, a picture command queue, and/or task scheduling for multithreading. Or, for a GPU implementation, a decoder uses innovations in the areas of inverse transforms, inverse quantization, fractional interpolation, intra prediction using waves, loop filtering using waves, memory usage and/or performance-adaptive loop filtering. Innovations are also described in the areas of error handling and recovery, determination of neighbor availability for operations such as context modeling and intra prediction, CABAC decoding, computation of collocated information for direct mode macroblocks in B slices, reduction of memory consumption, implementation of trick play modes, and picture dropping for quality adjustment. 120.-. (canceled)21. A method of video decoding using a video decoder , the method comprising:with the video decoder, entropy decoding plural encoded transform coefficients; andwith the video decoder, packing at least some of the decoded transform coefficients in one or more data structures, wherein the packing includes representing an individual decoded transform coefficient as a single multi-bit value including a block position and a coefficient level value packed together.2221. The method of wherein the one or more data structures include a buffer fragment storing one or more multi-bit values for respective non-zero coefficient values among the at least some of the decoded transform coefficients , the one or more multi-bit values including the single multi-bit value representing the individual decoded transform coefficient.23. The method of further comprising:with the video decoder, dynamically adding a new buffer fragment as needed to store more of the decoded transform coefficients, wherein the new buffer ...
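
The coefficient-packing idea in the claims lends itself to a tiny example: one non-zero coefficient is stored as a single integer whose low bits hold the position inside an 8x8 block and whose remaining bits hold the level. The exact bit layout here is invented.

```python
def pack_coeff(position: int, level: int) -> int:
    """Pack one non-zero transform coefficient into a single value (invented
    layout): low 6 bits hold the position inside an 8x8 block (0..63), the
    remaining bits hold the signed level, offset-encoded for simplicity."""
    assert 0 <= position < 64
    return ((level + (1 << 15)) << 6) | position     # 16-bit offset for the level

def unpack_coeff(packed: int):
    """Recover (position, level) from the packed value."""
    position = packed & 0x3F
    level = (packed >> 6) - (1 << 15)
    return position, level

packed = [pack_coeff(p, l) for p, l in [(0, 25), (5, -3), (17, 1)]]
print([unpack_coeff(v) for v in packed])   # [(0, 25), (5, -3), (17, 1)]
```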

Publication date: 10-04-2014

Neighbor determination in video decoding

Number: US20140098890A1
Assignee: Microsoft Corp

Video decoding innovations for multithreading implementations and graphics processor unit (“GPU”) implementations are described. For example, for multithreaded decoding, a decoder uses innovations in the areas of layered data structures, picture extent discovery, a picture command queue, and/or task scheduling for multithreading. Or, for a GPU implementation, a decoder uses innovations in the areas of inverse transforms, inverse quantization, fractional interpolation, intra prediction using waves, loop filtering using waves, memory usage and/or performance-adaptive loop filtering. Innovations are also described in the areas of error handling and recovery, determination of neighbor availability for operations such as context modeling and intra prediction, CABAC decoding, computation of collocated information for direct mode macroblocks in B slices, reduction of memory consumption, implementation of trick play modes, and picture dropping for quality adjustment.

Publication date: 12-01-2017

INTRA-REFRESH FOR VIDEO STREAMING

Number: US20170013274A1
Assignee:

Embodiments relate to encoding and decoding frames of a video stream. Video frames are encoded as intra-coded frames (Iframes) and predictive coded frames (P/Bframes) and transmitted. When a receiver of the encoded frames is unable to decode a frame, due to transmission problems or otherwise, the encoded video stream can be recovered without requiring a full Iframe to be generated at one time. Instead, intra-coded data is provided by the transmitter in slices. Specifically, frames with only portions of intra-coded data (Islices) are transmitted in sequence until enough intra-coded data is provided to the receiver to recover a frame and resume decoding. The intra-refresh frames may also contain slices predictively encoded (Pslices) based on restricted search spaces of preceding intra-refresh frames. 1. A method encoding recovery performed by a first computing device that is transmitting a video stream to a second computing device , the method comprising:intra-frame encoding a frame of the video stream to generate an Iframe and transmitting the Iframe to the second computing device;inter-frame encoding a plurality of frames to generated Pframes, a first of the Pframes encoded from the Iframe, and a second of the Pframes encoded based on first Pframe;receiving an indication from the second computing device that the second Pframe was not properly received or decodable by the second computing device; andresponsive to the indication, encoding and transmitting a sequence of intra-refresh frames, each intra-refresh frame comprising a single intra-refresh slice (Islice).2. A method according to claim 1 , wherein the intra-refresh frames are configured such that if the second computing device receives each of the intra-refresh frames a full frame is guaranteed to be recoverable from the intra-refresh frames.3. A method according to claim 1 , wherein a second intra-refresh frame is encoded immediately after a first intra-refresh frame claim 1 , and wherein the method further ...
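
A small sketch of an intra-refresh schedule under the assumptions above: each successive frame intra-codes one slice, and the predictively coded slices are restricted to searching only the slices already refreshed by preceding frames. The schedule representation is invented.

```python
def intra_refresh_plan(num_slices: int):
    """Invented schedule representation: for each intra-refresh frame, report
    which slice is intra-coded (the Islice) and which slices of the preceding
    frames the remaining Pslices may use for motion search (the already
    refreshed region).  After num_slices frames the whole picture has been
    refreshed and normal decoding can resume."""
    plan = []
    for frame_idx in range(num_slices):
        plan.append({
            "frame": frame_idx,
            "islice": frame_idx,                             # slice refreshed now
            "pslice_search_slices": list(range(frame_idx)),  # restricted search space
        })
    return plan

for step in intra_refresh_plan(num_slices=4):
    print(step)
# frame 0 refreshes slice 0; frame 3 refreshes slice 3 and may search slices 0-2
```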

Publication date: 12-01-2017

CUSTOM DATA INDICATING NOMINAL RANGE OF SAMPLES OF MEDIA CONTENT

Number: US20170013286A1
Assignee: Microsoft Technology Licensing, LLC

A media processing tool adds custom data to an elementary media bitstream or media container. The custom data indicates nominal range of samples of media content, but the meaning of the custom data is not defined in the codec format or media container format. For example, the custom data indicates the nominal range is full range or limited range. For playback, a media processing tool parses the custom data and determines an indication of media content type. A rendering engine performs color conversion operations whose logic changes based at least in part on the media content type. In this way, a codec format or media container format can in effect be extended to support full nominal range media content as well as limited nominal range media content, and hence preserve full or correct color fidelity, while maintaining backward compatibility and conformance with the codec format or media container format. 120.-. (canceled)21. One or more computer-readable media having stored thereon computer-executable instructions for causing a computing system , when programmed thereby , to perform video processing operations , wherein the one or more computer-readable media are selected from the group consisting of volatile memory , non-volatile memory , magnetic disk , CD-ROM , and DVD , the video processing operations comprising: [{'sup': 'n', 'full range characterized by values from 0 . . . 2−1 for samples of bit depth n; and'}, 'a limited range characterized by values in a sub-range of the full range; and, 'determining range data for encoded video content, wherein the range data indicates nominal range of samples of the encoded video content as video content type for input video to encoding, the samples of the encoded video content having a sample depth that indicates an available range of values of the samples of the encoded video content, wherein the nominal range is a range of values, within the available range for the sample depth of the samples of the encoded video content ...

Publication date: 11-01-2018

SYNTAX STRUCTURES INDICATING COMPLETION OF CODED REGIONS

Number: US20180014033A1
Assignee: Microsoft Technology Licensing, LLC

Syntax structures that indicate the completion of coded regions of pictures are described. For example, a syntax structure in an elementary bitstream indicates the completion of a coded region of a picture. The syntax structure can be a type of network abstraction layer unit, a type of supplemental enhancement information message or another syntax structure. For example, a media processing tool such as an encoder can detect completion of a coded region of a picture, then output, in a predefined order in an elementary bitstream, syntax structure(s) that contain the coded region as well as a different syntax structure that indicates the completion of the coded region. Another media processing tool such as a decoder can receive, in a predefined order in an elementary bitstream, syntax structure(s) that contain a coded region of a picture as well as a different syntax structure that indicates the completion of the coded region. 1. A computing system including:a buffer configured to store, as part of an elementary bitstream, one or more syntax structures that contain a coded region for a region of an image or video, and, after the one or more syntax structures that contain the coded region, a different syntax structure that indicates completion of the coded region, the different syntax structure including a next slice segment address that indicates a slice segment address for a next slice segment header when the slice segment address for the next slice segment header is present in the elementary bitstream; anda media processing tool configured to detect the completion of the coded region using the different syntax structure.2. The computing system of claim 1 , wherein the media processing tool is further configured to:decode the coded region to reconstruct the region.3. (canceled)4. The computing system of claim 1 , wherein the different syntax structure has a type that designates the different syntax structure as an end-of-region indicator.5. The computing system of ...

Publication date: 17-04-2014

OPERATING METHOD AND OPERATING SYSTEM FOR A PCB DRILLING-MILLING MACHINE USING DIFFERENT MOTION CONTROL PRODUCTS

Number: US20140107847A1
Assignee:

The invention discloses an operating method and system for a PCB drilling-milling machine using different motion control products. The method comprises that a PCB drilling-milling machine runs a control module of a motion control product by running a drilling-milling module. The operating method further comprises: storing a control module of at least one motion control product in the PCB drilling-milling machine; inputting a selection information which is a code of a desired motion control product while running the drilling-milling module; and, the PCB drilling-milling machine matching the selection information with the code(s) of all the motion control product(s) stored during the step 1, and running the control module of the motion control product matched with the selection information. The invention makes the drilling-milling machine compatible with different motion control products via one drilling-milling module. It is convenient to update and replace hardware(s) of drilling-milling machines and reduce the development cycle. 1. An operating method for a PCB drilling-milling machine using different motion control products , comprising a step that a PCB drilling-milling machine runs a control module of a motion control product by running a drilling-milling module , wherein the method further comprises:step 1, storing a control module of at least one motion control product in the PCB drilling-milling machine;step 2, inputting a selection information which is a code of a desired motion control product, while running the drilling-milling module; andstep 3, matching the selection information with the code(s) of all of the motion control product(s) stored during the step 1, and running the control module of the motion control product matched with the selection information by the PCB drilling-milling machine.2. The operating method in accordance with claim 1 , wherein the motion control product comprises a motor.3. The operating method in accordance with claim 1 , ...
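
A minimal sketch of steps 1-3 with invented product codes: stored control modules are keyed by motion-control product code, the entered selection information is matched against those codes, and the matching control module is run.

```python
# Registry of stored control modules, keyed by motion-control product code
# (codes and modules are invented for illustration).
CONTROL_MODULES = {
    "MC-100": lambda: "running control module for product MC-100",
    "MC-200": lambda: "running control module for product MC-200",
}

def run_drilling_milling(selection_code: str) -> str:
    """Match the entered selection code against the stored product codes and
    run the corresponding control module (roughly step 3 of the method)."""
    module = CONTROL_MODULES.get(selection_code)
    if module is None:
        raise ValueError(f"no stored control module matches code {selection_code!r}")
    return module()

print(run_drilling_milling("MC-200"))
```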

Publication date: 24-04-2014

BAND SEPARATION FILTERING / INVERSE FILTERING FOR FRAME PACKING / UNPACKING HIGHER-RESOLUTION CHROMA SAMPLING FORMATS

Number: US20140112394A1
Assignee: MICROSOFT CORPORATION

When packing a video frame of a higher-resolution chroma sampling format such as YUV 4:4:4 into frames of a lower-resolution chroma sampling format such as YUV 4:2:0, a computing device performs wavelet decomposition (or other band separation filtering) on sample values of chroma components of the higher-resolution frame, producing sample values of multiple bands. The device assigns the sample values of the bands to parts of the lower-resolution frames. During corresponding unpacking operations, a computing device assigns parts of the frames of the lower-resolution chroma sampling format to sample values of multiple bands. The device performs wavelet reconstruction (or other inverse band separation filtering) on the sample values of the bands, producing sample values of chroma components of the frame of the higher-resolution chroma sampling format. Band separation filtering can help improve quality of reconstruction when distortion has been introduced during encoding of the chroma components packed into low-resolution frames. 1. A method comprising: performing band separation filtering on sample values of chroma components of the one or more frames of the higher-resolution chroma sampling format to produce sample values of plural bands; and', 'assigning the sample values of the plural bands to parts of the one or more frames of the lower-resolution chroma sampling format., 'packing one or more frames of a higher-resolution chroma sampling format into one or more frames of a lower-resolution chroma sampling format, wherein the packing includes2. The method of further comprising claim 1 , after the packing:encoding the one or more frames of the lower-resolution chroma sampling format.3. The method of wherein the band separation filtering is a three-band wavelet decomposition or four-band wavelet decomposition.4. The method of wherein the band separation filtering uses a filter pair with a lowpass filter (“LPF”) and a highpass filter (“HPF”) claim 1 , and wherein the ...
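
A hedged illustration using a two-band Haar filter pair on one chroma row (the described technique also covers three- and four-band decompositions and other filter pairs): the lowpass band can stand in for the 4:2:0 chroma plane while the highpass band is packed elsewhere, and the synthesis step reconstructs the original samples when the bands are untouched.

```python
def haar_analysis(samples):
    """Two-band Haar decomposition of one chroma row (stand-in for the band
    separation filtering): a lowpass band that can take the place of the
    4:2:0 chroma plane, and a highpass band carried in otherwise unused
    areas of the packed frames."""
    low = [(a + b) / 2 for a, b in zip(samples[0::2], samples[1::2])]
    high = [(a - b) / 2 for a, b in zip(samples[0::2], samples[1::2])]
    return low, high

def haar_synthesis(low, high):
    """Inverse band separation filtering: perfectly reconstructs the original
    row when the bands are untouched."""
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out

row = [100, 102, 98, 96, 120, 124, 90, 88]
low, high = haar_analysis(row)
print(low, high)   # [101.0, 97.0, 122.0, 89.0] [-1.0, 1.0, -2.0, 1.0]
print(haar_synthesis(low, high) == [float(x) for x in row])   # True
```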

Publication date: 05-02-2015

Generic platform video image stabilization

Number: US20150036010A1
Assignee: Microsoft Corp

Video image stabilization provides better performance on a generic platform for computing devices by evaluating available multimedia digital signal processing components, and selecting the available components to utilize according to a hierarchy structure for video stabilization performance for processing parts of the video stabilization. The video stabilization has improved motion vector estimation that employs refinement motion vector searching according to a pyramid block structure relationship starting from a downsampled resolution version of the video frames. The video stabilization also improves global motion transform estimation by performing a random sample consensus approach for processing the local motion vectors, and selection criteria for motion vector reliability. The video stabilization achieves the removal of hand shakiness smoothly by real-time one-pass or off-line two-pass temporal smoothing with error detection and correction.

Publication date: 07-02-2019

CUSTOM DATA INDICATING NOMINAL RANGE OF SAMPLES OF MEDIA CONTENT

Number: US20190045237A1
Assignee: Microsoft Technology Licensing, LLC

A media processing tool adds custom data to an elementary media bitstream or media container. The custom data indicates nominal range of samples of media content, but the meaning of the custom data is not defined in the codec format or media container format. For example, the custom data indicates the nominal range is full range or limited range. For playback, a media processing tool parses the custom data and determines an indication of media content type. A rendering engine performs color conversion operations whose logic changes based at least in part on the media content type. In this way, a codec format or media container format can in effect be extended to support full nominal range media content as well as limited nominal range media content, and hence preserve full or correct color fidelity, while maintaining backward compatibility and conformance with the codec format or media container format. 120-. (canceled)21. In a computer system that implements a video processing tool , a method comprising: [{'sup': 'n', 'full range characterized by values from 0 . . . 2−1 for samples of bit depth n; and'}, 'a limited range characterized by values in less than the full range;, 'receiving range data and encoded video content in a first format, wherein the range data indicates nominal range of samples of the encoded video content, the samples of the encoded video content having a sample depth that indicates an available range of values of the samples of the encoded video content, wherein the nominal range is a range of values within the available range for the sample depth of the samples of the encoded video content, and wherein the range data indicates one of multiple possible options for the nominal range, the multiple possible options for the nominal range includingparsing the range data;decoding the encoded video content, thereby producing samples of reconstructed video output in the first format; andconverting the samples of the reconstructed video output from the ...

Publication date: 16-02-2017

PROCESSING ENCODED BITSTREAMS TO IMPROVE MEMORY UTILIZATION

Number: US20170048532A1
Author: Sadhwani Shyam, Wu Yongjun
Assignee:

An encoded bitstream of video data can include layers of encoded video data. Such layers can be removed by a device in response to, for example, available bandwidth or device capabilities. The encoded bitstream also includes values for reference count parameters that are used by a video decoder to allocate memory when decoding the video data. If layers of the encoded video data are removed from the encoded bitstream, the values for these reference count parameters are modified. By modifying the values of these parameters, the video decoder allocates a different amount of memory and memory utilization is improved. Such modifications can be made by processing the encoded bitstream without re-encoding the encoded video data. 1. A video processing system , comprising:an input configured to receive an initial encoded bitstream comprising encoded video data and values for reference count parameters into memory, the encoded video data comprising a plurality of layers;a bitstream processor configured to remove encoded video data for one or more of the plurality of layers from the initial encoded bitstream and to modify a value of at least one reference count parameter in the initial encoded bitstream, to provide a modified reduced encoded bitstream;an output configured to provide the modified reduced encoded bitstream.2. The video processing system of claim 1 , wherein the reference count parameter comprises an indication of a number of reference frames.3. The video processing system of claim 1 , wherein the reference count parameter comprises an indication of a number of buffering frames.4. The video processing system of claim 1 , wherein the bitstream processor is further configured to remove prefix network access layer units related to a base layer if all other layers have been removed.5. The video processing system of claim 1 , further comprising a video decoder configured to allocate memory based at least on the modified value of the reference count parameter.6. The ...

Publication date: 05-03-2015

AUDIO VIDEO PLAYBACK SYNCHRONIZATION FOR ENCODED MEDIA

Number: US20150062353A1
Assignee:

Techniques are described for inserting encoded markers into encoded audio-video content. For example, encoded audio-video content can be received and corresponding encoded audio and video markers can be inserted. The encoded audio and video markers can be inserted without changing the overall duration of the encoded audio and video streams and without changing most or all of the properties of the encoded audio and video streams. Corresponding encoded audio and video markers can be inserted at multiple locations (e.g., sync locations) in the encoded audio and video streams. Audio-video synchronization testing can be performed using encoded audio-video content with inserted encoded audio-video markers. 1. A method , implemented at least in part by a computing device , for inserting encoded markers into encoded audio-video content , the method comprising:receiving, by the computing device, encoded audio-video content comprising an encoded video stream and an encoded audio stream;inserting, by the computing device, an encoded video marker into the encoded video stream at a video sync location, wherein the encoded video marker is inserted without decoding or re-encoding the encoded video stream;inserting, by the computing device, an encoded audio marker into the encoded audio stream at an audio sync location corresponding to the video sync location, wherein the encoded audio marker is inserted without decoding or re-encoding the encoded audio stream; andoutputting, by the computing device, the encoded video stream with the inserted encoded video marker and the encoded audio stream with the inserted encoded audio marker.2. The method of wherein the receiving comprises:de-multiplexing the encoded audio-video content to produce the encoded video stream and the encoded audio stream.3. The method of wherein the outputting comprises:re-multiplexing the encoded video stream with the inserted encoded video marker and the encoded audio stream with the inserted encoded audio ...

Publication date: 02-03-2017

PARALLEL PROCESSING OF A VIDEO FRAME

Number: US20170064320A1
Assignee:

A graphics pipeline with components that process frames by portions (e.g., pixels or rows) or slices to reduce end-to-end latency. Components of a pipeline process portions of a same frame at the same time. For example, as graphics data for a frame is being generated and fills a framebuffer, once a certain portion of video data less than the whole frame (slice or sub-frame) becomes available, before the corresponding frame is finished filling the framebuffer, the next pipeline component after the framebuffer, for instance a video processor for color conversion or an encoder, begins to process the portion of the frame. While one portion of a frame is accumulating in the frame buffer, another portion of the same frame is being encoded by an encoder, and another portion of the frame might be being packaged by a multiplexer, and a network socket might start streaming the multiplexed portion. 1. A computing device comprising:processing hardware and storage hardware, the storage hardware storing an application that when executed by the processing hardware generates video frames;a framebuffer configured to store the video frames generated by the processing hardware, wherein each video frame comprises segments; andan encoder configured to compress the video frames, wherein the encoder receives a segment of a video frame from the framebuffer before other segments of the video frame have been fully generated and stored in the framebuffer.2. A computing device according to claim 1 , further comprising a multiplexer configured to multiplex the compressed video frames with one or more of audio data claim 1 , subpicture data claim 1 , or media metadata claim 1 , wherein the multiplexer receives a compressed segment of the video frame and begins multiplexing the compressed segment before receiving from the encoder a next compressed segment of the video frame.3. A computing device according to claim 2 , wherein the computing device further comprises a display and is configured to ...
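
A toy sketch of the pipelining idea with invented stage names: an encoder stage begins work on each segment as soon as the framebuffer stage emits it, instead of waiting for the whole frame; the bounded queue provides the hand-off between stages.

```python
import queue
import threading

def render_segments(out_q, num_segments=4):
    """Stand-in for the framebuffer stage: emits one frame segment at a time."""
    for idx in range(num_segments):
        out_q.put(f"frame0-segment{idx}")
    out_q.put(None)                       # end-of-frame marker

def encode_segments(in_q, results):
    """Stand-in for the encoder stage: starts compressing each segment as soon
    as it arrives, before later segments of the same frame exist."""
    while True:
        segment = in_q.get()
        if segment is None:
            break
        results.append(f"encoded({segment})")

q, results = queue.Queue(maxsize=2), []
enc = threading.Thread(target=encode_segments, args=(q, results))
enc.start()
render_segments(q)
enc.join()
print(results)   # four segments encoded without ever buffering the full frame
```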

10-03-2016 publication date

MEDIA DECODING CONTROL WITH HARDWARE-PROTECTED DIGITAL RIGHTS MANAGEMENT

Number: US20160070887A1
Assignee: MICROSOFT CORPORATION

Innovations in the area of hardware-protected digital rights management (“DRM”) systems are presented. For example, a hardware-protected DRM system includes a trusted layer and untrusted layer. In the untrusted layer, a control module receives source media data that includes encrypted media data. The control module processes metadata about the media data. The metadata, possibly exposed by a module in the trusted layer, is not opaque within the untrusted layer. In the trusted layer, using key data, a module decrypts encrypted media data, which can be the encrypted media data from the source media data or a transcrypted version thereof. A module in the trusted layer decodes the decrypted media data. A host decoder in the untrusted layer uses the metadata to manage at least some aspects of the decoding, rendering and display in the trusted layer, without exposure of decrypted media data or key data within the untrusted layer. 1. One or more computer-readable media storing computer-executable instructions for causing a computing system programmed thereby to perform a method comprising , in an untrusted layer of a hardware-protected digital rights management (“DRM”) system:receiving source media data including first encrypted media data;processing metadata about the media data, wherein the metadata is not opaque within the untrusted layer; andwith a host decoder in the untrusted layer, using the metadata to manage at least some aspects of decoding, rendering and/or display in a trusted layer of the hardware-protected DRM system, wherein the decoding, the rendering and the display follows decryption in the trusted layer of second encrypted media data based on the first encrypted media.2. The one or more computer-readable media of wherein the hardware-protected DRM system includes:for the trusted layer, (a) one or more integrated circuits adapted for decryption and/or decoding, (b) memory storing firmware instructions for controlling the one or more integrated circuits, ...

17-03-2016 publication date

Memory management for video decoding

Number: US20160080756A1
Author: Shyam Sadhwani, Yongjun Wu
Assignee: Microsoft Technology Licensing LLC

Techniques and tools described herein help manage memory efficiently during video decoding, especially when multiple video clips are concurrently decoded. For example, with clip-adaptive memory usage, a decoder determines first memory usage settings expected to be sufficient for decoding of a video clip. The decoder also determines second memory usage settings known to be sufficient for decoding of the clip. During decoding, memory usage is initially set according to the first settings. Memory usage is adaptively increased during decoding, subject to theoretical limits in the second settings. With adaptive early release of side information, the decoder can release side information memory for a picture earlier than the decoder releases image plane memory for the picture. The decoder can also adapt memory usage for decoded transform coefficients depending on whether the coefficients are for intra-coded blocks or inter-coded blocks, and also exploit the relative sparseness of non-zero coefficient values.
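
The clip-adaptive idea can be pictured as a pool that starts at the settings expected to be sufficient and is only grown, during decoding, toward the settings known to be sufficient. The sketch below uses invented numbers and a stand-in trigger; it is not the decoder's actual bookkeeping.

    class AdaptiveBufferPool:
        def __init__(self, expected_count, max_count):
            self.allocated = expected_count      # first settings: expected to be sufficient
            self.max_count = max_count           # second settings: known (theoretical limit) to be sufficient

        def ensure(self, needed):
            if needed > self.allocated:
                # Grow lazily during decoding, but never past the theoretical limit.
                self.allocated = min(needed, self.max_count)
            return self.allocated

    pool = AdaptiveBufferPool(expected_count=4, max_count=16)
    print(pool.ensure(6), pool.ensure(3))   # grows to 6 on the first call, stays at 6 afterwards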

16-03-2017 publication date

VERIFICATION OF ERROR RECOVERY WITH LONG TERM REFERENCE PICTURES FOR VIDEO CODING

Number: US20170078705A1
Assignee: Microsoft Technology Licensing, LLC

Techniques are described for verifying long-term reference (LTR) usage by a video encoder and/or a video decoder. For example, verifying that a video encoder and/or a video decoder is applying LTR correctly can done by encoding and decoding a video sequence in two different ways and comparing the results. In some implementations, verifying LTR usage is accomplished by decoding an encoded video sequence that has been encoded according to an LTR usage pattern, decoding a modified encoded video sequence that has been encoded according to the LTR usage pattern and modified according to a lossy channel model, and comparing decoded video content from both the encoded video sequence and the modified encoded video sequence. For example, the comparison can comprise determining whether both decoded video content match bit-exactly beginning from an LTR recovery point location. 1. A method , implemented by a computing device , for verifying long term reference picture usage , the method comprising:receiving an encoded video sequence that has been encoded according to a long-term reference (LTR) usage pattern;receiving a modified version of the encoded video sequence, encoded according to the LTR usage pattern, that has been modified according to a lossy channel model that models video data loss in a communication channel;decoding, by a video decoder, the encoded video sequence to create first decoded video content;decoding, by the video decoder, the modified version of the encoded video sequence to create second decoded video content;comparing the first decoded video content and the second decoded video content; andbased on the comparing, outputting an indication of whether the first decoded video content and the second decoded video content match beginning from an LTR recovery point location.2. The method of wherein the LTR usage pattern defines a pattern of LTR usage during encoding claim 1 , and wherein the LTR usage pattern comprises an LTR refresh periodic interval.3. The ...
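
The core of the verification is a bit-exact comparison of the two decoded outputs from the recovery point onward. A minimal sketch of that comparison step, with decoded frames represented as plain byte strings and the decoders themselves left out, might look like this:

    def verify_ltr_recovery(frames_reference, frames_after_loss, recovery_index):
        # True if both decodes match bit-exactly from the LTR recovery point on.
        tail_a = frames_reference[recovery_index:]
        tail_b = frames_after_loss[recovery_index:]
        return len(tail_a) == len(tail_b) and all(a == b for a, b in zip(tail_a, tail_b))

    # Example: frames diverge while the simulated loss propagates, then snap back at frame 3
    # once the decoder re-anchors on the long-term reference picture.
    reference = [b"f0", b"f1", b"f2", b"f3", b"f4"]
    after_loss = [b"f0", b"x1", b"x2", b"f3", b"f4"]
    print(verify_ltr_recovery(reference, after_loss, recovery_index=3))   # True -> LTR usage verified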

29-03-2018 publication date

Process for producing butadiene by oxidative dehydrogenation of butylene

Number: US20180086679A1
Assignee: Wison Engineering Ltd

The present invention provides a process for producing butadiene by oxidative dehydrogenation of butylene, comprising: a reaction stage, wherein a multi-stage adiabatic fixed bed in series is used, wherein butylene, oxygen-comprising gas and water are reacted in the presence of a catalyst in each stage of the adiabatic fixed bed with the first stage of the adiabatic fixed bed being further separately fed a diluent, being nitrogen and/or carbon dioxide, and the molar ratio between this separately fed diluents and the oxygen of all the oxygen-comprising gases fed in the subsequent stage(s) of the adiabatic fixed bed being controlled, wherein the oxygen-comprising gas is air, oxygen-enriched air or oxygen, and at least one of all the oxygen-comprising gases fed in the subsequent stage(s) of the adiabatic fixed bed is oxygen-enriched air having a specific oxygen concentration or oxygen; and a post treatment stage, wherein the effluent from the last stage of the adiabatic fixed bed is treated to obtain a product butadiene. The present invention has an advantage that the whole process is with reduced total energy consumption.

31-03-2016 publication date

PROCESSING PARAMETERS FOR OPERATIONS ON BLOCKS WHILE DECODING IMAGES

Number: US20160094854A1
Assignee:

To decode encoded video using a computer with a central processing unit and a graphics processing unit as a coprocessor, parameters applied to blocks of intermediate image data are transferred from the central processing unit to the graphics processing unit. When the operation being performed applies to a small portion of the blocks of intermediate image data, then the central processing unit can transfer to the graphics processing unit the parameters for only those blocks to which the operation applies. In particular, the central processing unit can transfer a set of parameters for a limited number of blocks of intermediate image data, with an indication of the block to which each set of parameters applies, which both can improve speed of operation and can reduce power consumption. 1. A computer-implemented process comprising:receiving a bitstream of encoded data, the encoded data including parameters for operations to be performed on blocks of intermediate image data;analyzing the parameters to determine whether the operation is sparsely applied to the intermediate image data; andin response to determining that the parameters are sparsely applied to the intermediate image data, generating a representation of the parameters to include, for each set of parameters to be applied to a block, an indication of the block to which the set of parameters is to be applied.2. The computer-implemented process of claim 1 , further comprising decoding the bitstream using the generated representation of the parameters.3. The computer-implemented process of claim 1 , further comprising storing the generated representation of the parameters in association with the bitstream.4. The computer-implemented process of claim 1 , further comprising providing the generated representation of the parameters to a graphics processing unit.5. The computer-implemented process of claim 4 , further comprising instructing the graphics processing unit to apply the generated representation of the ...
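
One way to picture the sparse transfer is to send (block index, parameters) pairs only when few blocks are affected, and to fall back to the dense per-block array otherwise. The sketch below uses an invented parameter layout and threshold; it only illustrates the representation, not the actual CPU-to-GPU interface.

    def pack_sparse_params(dense_params, sparsity_threshold=0.25):
        # dense_params is indexed by block number, with None where the operation does not apply.
        active = [(block, params) for block, params in enumerate(dense_params) if params is not None]
        if len(active) <= sparsity_threshold * len(dense_params):
            return {"mode": "sparse", "entries": active}       # (block index, params) pairs only
        return {"mode": "dense", "entries": dense_params}       # cheaper to transfer everything

    blocks = [None] * 64
    blocks[5] = {"deblock_strength": 2}
    blocks[40] = {"deblock_strength": 1}
    print(pack_sparse_params(blocks))   # only 2 of 64 blocks are transferred to the GPU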

30-03-2017 publication date

GENERIC PLATFORM VIDEO IMAGE STABILIZATION

Number: US20170094172A1
Assignee: Microsoft Technology Licensing, LLC

Video image stabilization provides better performance on a generic platform for computing devices by evaluating available multimedia digital signal processing components, and selecting the available components to utilize according to a hierarchy structure for video stabilization performance for processing parts of the video stabilization. The video stabilization has improved motion vector estimation that employs refinement motion vector searching according to a pyramid block structure relationship starting from a downsampled resolution version of the video frames. The video stabilization also improves global motion transform estimation by performing a random sample consensus approach for processing the local motion vectors, and selection criteria for motion vector reliability. The video stabilization achieves the removal of hand shakiness smoothly by real-time one-pass or off-line two-pass temporal smoothing with error detection and correction. 143-. (canceled)44. A method of real-time sharing of stabilized digital video for multiple frames of a captured video sequence , comprising:estimating a motion transform that represents jittery motion of a video capture device;warping at least a portion of a frame, among the multiple frames of the captured video sequence, based on the motion transform to compensate for the jittery motion of the video capture device; andinitiating uploading of stabilized video from the video capture device to a server device associated with a service for video sharing, storage, or distribution.45. The method of claim 45 , wherein said warping comprises use of a vertex shader of a processing unit.46. A device comprising:one or more processing units;one or more memory units;a camera; andthe one or more memory units storing computer-executable instructions for causing the device, when programmed thereby, to perform real-time digital video stabilization that includes: estimating, using at least one of the processing units, a motion transform that ...

07-04-2016 publication date

Syntax structures indicating completion of coded regions

Number: US20160100196A1
Assignee: Microsoft Technology Licensing LLC

Syntax structures that indicate the completion of coded regions of pictures are described. For example, a syntax structure in an elementary bitstream indicates the completion of a coded region of a picture. The syntax structure can be a type of network abstraction layer unit, a type of supplemental enhancement information message or another syntax structure. For example, a media processing tool such as an encoder can detect completion of a coded region of a picture, then output, in a predefined order in an elementary bitstream, syntax structure(s) that contain the coded region as well as a different syntax structure that indicates the completion of the coded region. Another media processing tool such as a decoder can receive, in a predefined order in an elementary bitstream, syntax structure(s) that contain a coded region of a picture as well as a different syntax structure that indicates the completion of the coded region.

06-04-2017 publication date

MPEG TRANSPORT FRAME SYNCHRONIZATION

Number: US20170098088A1
Assignee:

Techniques are described for communicating encoded data using start code emulation prevention. The described techniques include obtaining at least one partially encrypted packet, identifying at least one portion of the packet that is unencrypted, and determining that the identified unencrypted portion(s) emulates a start code. Start code emulation prevention data or emulation prevention bytes (EPBs) may be inserted into only the encrypted portion of the packet. The modified packet may be communicated to another device/storage, along with an indication of which portion(s) of the packet are unencrypted. Upon receiving the packet and indication, the receiving device may identify and remove the EPBs in the identified unencrypted portion(s) of the packet, and decrypt the packet to recover the data. In some aspects, upon identifying the indication, the receiving device may only search for EPBs in the unencrypted portion(s) of the packet, thus yielding a more efficient start code emulation prevention process. 1. A system for communicating encoded data using start code emulation prevention , the system comprising: receive at least one partially encrypted packet;', 'identify at least one portion of the at least one partially encrypted packet that is unencrypted;', 'determine that the at least one identified unencrypted portion of the at least one partially encrypted packet has start code emulation; and', 'in response to the determining that the at least one identified unencrypted portion has start code emulation, insert start code emulation prevention data only in the at least one partially encrypted packet;, 'a computing device communicatively coupled to a transmitter, the computing device configured towherein the transmitter is configured to:transmit the at least one partially encrypted packet with an indication of the at least one identified unencrypted portion of the at least one partially encrypted packet.2. The system of claim 1 , further comprising:a second computing ...
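
A simplified picture of range-restricted emulation prevention is standard MPEG-style 0x03 insertion after two zero bytes, applied only inside designated byte ranges of the packet (here, the ranges passed in by the caller). This is an illustrative stand-in, not the patented procedure, and it ignores the transport-stream packet structure.

    def insert_epbs(packet: bytes, ranges):
        # ranges: list of (start, end) byte offsets, in the original packet, where escaping applies.
        out = bytearray()
        zeros = 0
        for pos, byte in enumerate(packet):
            in_range = any(start <= pos < end for start, end in ranges)
            if in_range and zeros >= 2 and byte <= 0x03:
                out.append(0x03)        # emulation prevention byte
                zeros = 0
            out.append(byte)
            zeros = zeros + 1 if byte == 0x00 else 0
        return bytes(out)

    data = b"\x47\x00\x00\x01\xab\x00\x00\x01\xcd"
    print(insert_epbs(data, ranges=[(0, 5)]).hex())   # only the first emulated start code is escaped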

14-04-2016 publication date

Buffer Optimization

Number: US20160104457A1
Author: Sadhwani Shyam, Wu Yongjun
Assignee:

Buffer optimization techniques are described herein in which a graphics processing system is configured to implement and select between a plurality of buffer schemes for processing of an encoded data stream in dependence upon formats used for decoding and rendering (e.g., video format, bit depth, resolution, content type, etc.) and device capabilities such as available memory and/or processing power. Processing of an encoded data stream for display and rendering via the graphics processing system then occurs using a selected one of the buffer schemes to define buffers employed for the decoding and rendering, including at least configuring the sizes of buffers. The plurality of schemes may include at least one buffer scheme for processing the encoded content when the input format and the output format are the same, and a different buffer scheme for processing the encoded content when the input format and the output format are different. 1. A computer-implemented method comprising:obtaining a data stream of encoded content for display via a device;establishing an input format and an output format for processing of the encoded content to enable the display via the device;ascertaining capabilities of the device;selecting a buffer scheme to use for processing of the encoded content from a plurality of available buffer schemes in dependence upon the established input format and output format and the ascertained capabilities of the device;allocating buffers for processing of the encoded content in accordance with the selected buffer scheme; andprocessing the encoded content for display via the device using the allocated buffers.2. The computer-implemented method of claim 1 , further comprising configuring sizes for a decoding picture buffer and an output picture buffer as part of the allocating claim 1 , the sizes specified by the buffer scheme that is selected.3. The computer-implemented method of claim 2 , further comprising claim 2 , as part of the processing claim 2 , ...
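
The selection step can be imagined as a small decision function over the input/output formats and the device's capabilities. The scheme names, buffer counts and memory threshold below are invented for illustration; only the shape of the decision (same-format schemes versus a split scheme for format conversion) mirrors the description above.

    def select_buffer_scheme(input_format, output_format, available_memory_mb):
        same_format = input_format == output_format
        if same_format and available_memory_mb >= 512:
            return {"name": "shared", "decode_picture_buffers": 8, "output_picture_buffers": 0}
        if same_format:
            return {"name": "shared-small", "decode_picture_buffers": 4, "output_picture_buffers": 0}
        # Different input and output formats need a separate conversion/output buffer pool.
        return {"name": "split", "decode_picture_buffers": 6, "output_picture_buffers": 4}

    print(select_buffer_scheme("NV12", "NV12", 1024))   # same format -> buffers can be shared
    print(select_buffer_scheme("P010", "NV12", 256))    # format conversion -> split pools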

14-04-2016 publication date

Video Parameter Techniques

Number: US20160105678A1
Author: Dalal Firoz, Wu Yongjun
Assignee:

Video parameter storage and processing techniques with MPEG-4 file format are described. In one or more implementations, techniques are described in which sequence and parameter sets are specified in-band with collections of pictures of video as the default option. Techniques are also described in which different parameter set identifiers (IDs) are specified for the collections within the video. Techniques are also described in which maximum clip parameters are specified in a sample description box. Further, techniques are described in which parameter sets are inserted at a beginning of sample data when an access unit delimiter (AUD) network access layer (NAL) unit is not present or are inserted after the AUD NAL unit in the video when present. 1. A method comprising:receiving video at a device that includes first and second collections of pictures; andencoding the video by the device to include a first sequence and picture parameter set that is associated in-band with the first collection of pictures and a second sequence and picture parameter set that is associated in-band with the second collection of pictures.2. A method as described in claim 1 , wherein the video is configured in accordance with H.264/MPEG-4 AVC.3. A method as described in claim 1 , wherein the video is configured in accordance with High Efficiency Video Coding (HEVC).4. A method as described in claim 1 , wherein the first and second collections include pictures having different encoding or decoding characteristics claim 1 , one to another.5. A method as described in claim 1 , wherein the first and second collections include pictures having different resolutions claim 1 , profiles claim 1 , levels claim 1 , or aspect ratios.6. A method as described in claim 1 , wherein the first and second sequence and picture parameters sets describe differences in infrequently changing parameter information.7. A device comprising: receiving video that includes first and second collections of pictures, in ...

21-04-2016 publication date

Video stabilization using padded margin pixels

Number: US20160112638A1
Assignee: Microsoft Technology Licensing LLC

One or more techniques and/or systems are provided for video stabilization and/or for image frame generation. For example, a user may instruct a video application hosted on a smart phone to capture a video at a target resolution of 1080 pixels. A padded input having a padded resolution that is larger than the target resolution may be obtained from a capture device, such as a camera of the smart phone. The padded input may be provided to a video stabilization component to obtain a target image frame having the target resolution. In this way, the video stabilization component may perform cropping using padded margin pixels (e.g., additional pixels of the padded input beyond the 1080 pixels of the target resolution) so that image upscaling after cropping (e.g., to account for global warping, etc.) may be mitigated to reduce blur that may otherwise result from image upscaling.
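
A minimal sketch of the cropping step: the compensating shift is clamped to the padded margin, so the target-resolution window always stays inside the captured pixels and no upscaling is needed afterwards. The numbers are illustrative; the per-frame shift would come from the stabilizer.

    def crop_stabilized(padded_width, padded_height, target_width, target_height, dx, dy):
        margin_x = (padded_width - target_width) // 2
        margin_y = (padded_height - target_height) // 2
        # Clamp the compensating shift to the padded margin so the crop never leaves the frame.
        dx = max(-margin_x, min(margin_x, dx))
        dy = max(-margin_y, min(margin_y, dy))
        left, top = margin_x + dx, margin_y + dy
        return left, top, left + target_width, top + target_height

    print(crop_stabilized(2048, 1152, 1920, 1080, dx=-30, dy=12))   # crop rectangle inside the padded frame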

28-04-2016 publication date

Content Adaptive Decoder Quality Management

Number: US20160117796A1
Assignee: Microsoft Technology Licensing LLC

In one example, a quality management controller of a video processing system may optimize a video recovery action through the selective dropping of video frames. The video processing system may store a compressed video data set in memory. The video processing system may receive a recovery quality indication describing a recovery priority of a user. The video processing system may apply a quality management controller in a video pipeline to execute a video recovery action to retrieve an output data set from the compressed video data set using a video decoder. The quality management controller may select a recovery initiation frame from the compressed video data set to be an initial frame to decompress based upon the recovery quality indication.

18-04-2019 publication date

IMAGE PREPROCESSING FOR GENERALIZED IMAGE PROCESSING

Number: US20190114499A1
Assignee: XILINX, INC.

An example preprocessor circuit for formatting image data into a plurality of streams of image samples includes: a first buffer configured to store a plurality of rows of the image data and output a row of the plurality of rows; a second buffer, coupled to the first buffer, including a plurality of storage locations to store a respective plurality of image samples of the row output by the first buffer; a plurality of shift registers; an interconnect network including a plurality of connections, each connection coupling a respective one of the plurality of shift registers to more than one of the plurality of storage locations, one or more of the plurality of storage locations being coupled to more than one of the plurality of connections; and a control circuit configured to load the plurality of shift registers with the plurality of image samples based on the plurality of connections and shift the plurality of shift registers to output the plurality of streams of image samples. 1. A preprocessor circuit for formatting image data into a plurality of streams of image samples , the preprocessor circuit comprising:a first buffer configured to store a plurality of rows of the image data and output a row of the plurality of rows;a second buffer, coupled to the first buffer, including a plurality of storage locations to store a respective plurality of image samples of the row output by the first buffer;a plurality of shift registers;an interconnect network including a plurality of connections, each connection coupling a respective one of the plurality of shift registers to more than one of the plurality of storage locations, one or more of the plurality of storage locations being coupled to more than one of the plurality of connections; anda control circuit configured to load the plurality of shift registers with the plurality of image samples based on the plurality of connections and shift the plurality of shift registers to output the plurality of streams of image samples ...

18-04-2019 publication date

MULTI-LAYER NEURAL NETWORK PROCESSING BY A NEURAL NETWORK ACCELERATOR USING HOST COMMUNICATED MERGED WEIGHTS AND A PACKAGE OF PER-LAYER INSTRUCTIONS

Number: US20190114529A1
Assignee: XILINX, INC.

In the disclosed methods and systems for processing in a neural network system, a host computer system writes a plurality of weight matrices associated with a plurality of layers of a neural network to a memory shared with a neural network accelerator. The host computer system further assembles a plurality of per-layer instructions into an instruction package. Each per-layer instruction specifies processing of a respective layer of the plurality of layers of the neural network, and respective offsets of weight matrices in a shared memory. The host computer system writes input data and the instruction package to the shared memory. The neural network accelerator reads the instruction package from the shared memory and processes the plurality of per-layer instructions of the instruction package. 1. A method comprising:writing by a host computer system, a plurality of weight matrices associated with a plurality of layers of a neural network to a memory shared with a neural network accelerator;assembling a plurality of per-layer instructions into an instruction package by the host computer system, each per-layer instruction specifying processing of a respective layer of the plurality of layers of the neural network, and respective offsets of weight matrices in a shared memory;writing input data and the instruction package by the host computer system to the shared memory;reading the instruction package from the shared memory by the neural network accelerator; andprocessing the plurality of per-layer instructions of the instruction package by the neural network accelerator.2. The method of claim 1 , wherein the writing of the plurality of weight matrices includes writing all of the plurality of weight matrices to the shared memory before the processing of the plurality of per-layer instructions.3. The method of claim 1 , wherein the writing of the plurality of weight matrices includes writing all of the plurality of weight matrices to contiguous address space in the shared ...
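
The host-side packaging can be sketched as laying the per-layer weight matrices out contiguously and recording, for each layer, an instruction with the matrix's offset and shape. The field names and layout below are invented; the sketch only shows the idea of one instruction package driving the whole multi-layer run.

    import numpy as np

    def build_package(layers):
        # layers: list of (name, weight_matrix). Returns (shared_weights_blob, instruction_package).
        blob = bytearray()
        instructions = []
        for name, weights in layers:
            offset = len(blob)                       # per-layer weight offset in shared memory
            blob += weights.astype(np.float32).tobytes()
            instructions.append({"layer": name,
                                 "weight_offset": offset,
                                 "rows": weights.shape[0],
                                 "cols": weights.shape[1]})
        return bytes(blob), instructions

    layers = [("fc1", np.ones((4, 8))), ("fc2", np.ones((8, 2)))]
    weights_blob, package = build_package(layers)
    print(package)   # the accelerator walks this package and reads each matrix at its offset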

17-05-2018 publication date

Detecting Markers in an Encoded Video Signal

Number: US20180139463A1
Author: Lin Chih-Lung, Wu Yongjun
Assignee: Microsoft Technology Licensing, LLC

A video decoding method is implemented by a computer having multiple parallel processing units. A stream of data elements is received, some of which encode video content. The stream comprises marker sequences, each marker sequence comprising a marker which does not encode video content. A known pattern of data elements occurs in each marker sequence. A respective part of the stream is supplied to each parallel processing unit. Each parallel processing unit processes the respective part of the stream, whereby multiple parts of the stream are processed in parallel, to detect whether any of the multiple parts matches the known pattern of data elements, thereby identifying the markers. The encoded video content is separated from the identified markers. The separated video content is decoded, and the decoded video content outputted on a display. 1. A computer-implemented method comprising:determining a type of a data packet in a data stream that includes data packets and marker sequences; forming a plurality of subsequences from a respective part of the data packet, each element of the plurality of subsequences being a first value or a second value;', 'summing, for each subsequence, elements of said each subsequence to form a plurality of sums; and', 'indicating the one or more occurrences when the plurality of sums indicate one or more saturations; and', 'removing emulation prevention markers following at least some of the one or more occurrences of the known pattern of data elements in the data packet to generate a decodable version of the data packet., 'detecting in the data packet one or more occurrences of a known pattern of data elements by, responsive to the type of the data packet indicating that the data packet includes encoded video content, processing in parallel different parts of the data packet with different respective parallel processing units by, for each parallel processing unit2. The method according to wherein the data stream includes real-time video ...
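
The claimed detector sums fixed-length subsequences of indicator values and flags saturation; the sketch below substitutes a plain per-chunk search for the three-byte pattern, purely to show how a packet can be split across workers with a small overlap so matches on chunk boundaries are not lost.

    from concurrent.futures import ThreadPoolExecutor

    PATTERN = b"\x00\x00\x01"

    def scan_chunk(data, start, end):
        # Overlap by len(PATTERN)-1 bytes so a pattern straddling the boundary is still found,
        # but only report matches that start inside this chunk to avoid duplicates.
        hits = []
        limit = min(end + len(PATTERN) - 1, len(data))
        pos = data.find(PATTERN, start, limit)
        while pos != -1 and pos < end:
            hits.append(pos)
            pos = data.find(PATTERN, pos + 1, limit)
        return hits

    def find_markers(data, workers=4):
        chunk = max(1, len(data) // workers)
        spans = [(i, min(i + chunk, len(data))) for i in range(0, len(data), chunk)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = pool.map(lambda span: scan_chunk(data, *span), spans)
        return sorted(p for hits in results for p in hits)

    stream = b"\xaa" * 10 + PATTERN + b"\xbb" * 10 + PATTERN + b"\xcc"
    print(find_markers(stream))   # byte offsets of the detected marker sequences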

14-08-2014 publication date

Memory management for video decoding

Number: US20140226727A1
Author: Shyam Sadhwani, Yongjun Wu
Assignee: Microsoft Corp

Techniques and tools described herein help manage memory efficiently during video decoding, especially when multiple video clips are concurrently decoded. For example, with clip-adaptive memory usage, a decoder determines first memory usage settings expected to be sufficient for decoding of a video clip. The decoder also determines second memory usage settings known to be sufficient for decoding of the clip. During decoding, memory usage is initially set according to the first settings. Memory usage is adaptively increased during decoding, subject to theoretical limits in the second settings. With adaptive early release of side information, the decoder can release side information memory for a picture earlier than the decoder releases image plane memory for the picture. The decoder can also adapt memory usage for decoded transform coefficients depending on whether the coefficients are for intra-coded blocks or inter-coded blocks, and also exploit the relative sparseness of non-zero coefficient values.

01-06-2017 publication date

VIDEO DECODING IMPLEMENTATIONS FOR A GRAPHICS PROCESSING UNIT

Number: US20170155907A1
Assignee: Microsoft Technology Licensing, LLC

Video decoding innovations for multithreading implementations and graphics processor unit (“GPU”) implementations are described. For example, for multithreaded decoding, a decoder uses innovations in the areas of layered data structures, picture extent discovery, a picture command queue, and/or task scheduling for multithreading. Or, for a GPU implementation, a decoder uses innovations in the areas of inverse transforms, inverse quantization, fractional interpolation, intra prediction using waves, loop filtering using waves, memory usage and/or performance-adaptive loop filtering. Innovations are also described in the areas of error handling and recovery, determination of neighbor availability for operations such as context modeling and intra prediction, CABAC decoding, computation of collocated information for direct mode macroblocks in B slices, reduction of memory consumption, implementation of trick play modes, and picture dropping for quality adjustment. 120.-. (canceled)21. A computer system comprising one or more processing units , memory , and storage , wherein the memory and/or the storage has stored therein computer-executable instructions for causing the computer system , when programmed thereby , to perform video processing comprising:receiving encoded data for a picture that includes plural blocks; anddecoding the encoded data to reconstruct the picture, including performing decoding operations for the plural blocks, on a wave-by-wave basis, as plural waves, each of the plural waves including one or more of the plural blocks such that block-to-block dependencies are not permitted within a given wave of the plural waves but are permitted between the given wave and any preceding waves of the plural waves, wherein, for at least one of the plural waves, at least some of the one or more blocks within the wave are processed in parallel.22. The computer system of claim 21 , wherein the plural waves roughly correspond to diagonal lines of blocks within the ...
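
The wave scheduling can be sketched by grouping blocks into anti-diagonals: assuming each block depends only on its left and top neighbours, every dependency of a block in wave k lies in an earlier wave, so the blocks inside one wave can run in parallel. The per-block work below is a stand-in print.

    from concurrent.futures import ThreadPoolExecutor

    def decode_in_waves(rows, cols, decode_block):
        # Wave k holds every block whose row+column equals k, so a block's left and top
        # neighbours always belong to an earlier wave (their work is already done).
        for wave in range(rows + cols - 1):
            blocks = [(r, wave - r) for r in range(rows) if 0 <= wave - r < cols]
            with ThreadPoolExecutor() as pool:            # blocks inside one wave run in parallel
                list(pool.map(lambda rc: decode_block(*rc), blocks))

    decode_in_waves(3, 4, lambda r, c: print(f"decoded block ({r},{c})"))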

25-06-2015 publication date

Object Detection Techniques

Number: US20150178552A1
Assignee: MICROSOFT CORPORATION

Object detection techniques are described. In one or more implementations, a plurality of images are received by a computing device. The plurality of images are analyzed by the computing device to detect whether the images include, respectively, a depiction of an object. If an object is found in a first image, the locations, angles and scales for object detection can be further restricted in a second one. If an object is not found in a first one of the image, different portions of a second one of the images are analyzed for object detection. 1. A method comprising:receiving a plurality of images by a computing device;analyzing the plurality of images by the computing device to detect whether the images include, respectively, a depiction of an object, the analyzing performed such that one or more portions used to detect the object in a first said image is different than one or more portions used to detect the object in a second said image.2. A method as described in claim 1 , wherein the difference in the one or more portions for the respective first and second said images involves a scale assigned for the object to perform the analysis.3. A method as described in claim 1 , wherein the difference in the one or more portions for the respective first and second said images involves a region of the respective first and second said images that is utilized to perform the analysis.4. A method as described in claim 1 , wherein the difference in the one or more portions for the respective first and second said images involves an angle at which the object is to be detected in the respective first and second said images.5. A method as described in claim 1 , wherein the difference in the one or more portions for at least one of the respective first and second said images is based on a previous detection of the object in one or more of the plurality of frames.6. A method as described in claim 5 , wherein the previous detection in indicates a scale of the object that is to be ...

07-07-2016 publication date

Video Decoding

Number: US20160198171A1
Author: Lin Chih-Lung, Wu Yongjun
Assignee:

A video decoding method is implemented by a computer having multiple parallel processing units. A stream of data elements is received, some of which encode video content. The stream comprises marker sequences, each marker sequence comprising a marker which does not encode video content. A known pattern of data elements occurs in each marker sequence. A respective part of the stream is supplied to each parallel processing unit. Each parallel processing unit processes the respective part of the stream, whereby multiple parts of the stream are processed in parallel, to detect whether any of the multiple parts matches the known pattern of data elements, thereby identifying the markers. The encoded video content is separated from the identified markers. The separated video content is decoded, and the decoded video content outputted on a display. 1. A video decoding method , implemented by a computer having multiple parallel processing units , comprising:receiving a stream of data elements, some of which encode video content, the stream comprising marker sequences, each marker sequence comprising a marker which does not encode video content, wherein a known pattern of data elements occurs in each marker sequence;supplying a respective part of the stream to each parallel processing unit;each parallel processing unit processing the respective part of the stream, whereby multiple parts of the stream are processed in parallel, to detect whether any of the multiple parts matches the known pattern of data elements, thereby identifying the markers;separating the encoded video content from the identified markers;decoding the separated video content; andoutputting the decoded video content on a display.2. A method according to wherein at least some of the markers are dividing markers which divide the stream into packets claim 1 , the dividing markers identified to identify the packets.3. A method according to claim 2 , wherein each packet comprises payload data and header data ...

02-10-2014 publication date

CUSTOM DATA INDICATING NOMINAL RANGE OF SAMPLES OF MEDIA CONTENT

Number: US20140294094A1
Assignee: MICROSOFT CORPORATION

A media processing tool adds custom data to an elementary media bitstream or media container. The custom data indicates nominal range of samples of media content, but the meaning of the custom data is not defined in the codec format or media container format. For example, the custom data indicates the nominal range is full range or limited range. For playback, a media processing tool parses the custom data and determines an indication of media content type. A rendering engine performs color conversion operations whose logic changes based at least in part on the media content type. In this way, a codec format or media container format can in effect be extended to support full nominal range media content as well as limited nominal range media content, and hence preserve full or correct color fidelity, while maintaining backward compatibility and conformance with the codec format or media container format. 1. A method comprising:with a media processing tool, adding custom data to encoded media content, wherein the custom data indicates nominal range of samples; andoutputting the custom data and the encoded media content.2. The method of wherein the custom data is added as one or more syntax elements in an elementary media bitstream that also includes syntax elements for the encoded media content claim 1 , such that backward compatibility and conformance with format of the elementary media bitstream are maintained.3. The method of wherein the one or more syntax elements for the custom data are added in the elementary media bitstream as entry point user data.4. The method of wherein the media content is video content claim 2 , wherein the elementary media bitstream is an elementary video bitstream claim 2 , wherein the media processing tool is a video encoder claim 2 , and wherein the method further comprises:receiving an indication of video content type provided by a video source;receiving input video content provided by the video source; andproducing the elementary ...
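
As a toy example of why the signalled nominal range matters downstream, the expansion applied to a luma sample differs between limited (studio) range and full range. The BT.601-style numbers below are a common convention used here for illustration, not a statement of what the rendering engine actually does.

    def luma_to_full_range(y, nominal_range):
        if nominal_range == "full":
            return y                                   # 0..255 already covers the full range
        # limited (studio) range: 16..235 is stretched to 0..255
        return max(0, min(255, round((y - 16) * 255 / 219)))

    print(luma_to_full_range(235, "limited"), luma_to_full_range(235, "full"))   # 255 vs 235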

21-07-2016 publication date

Encoding/decoding of high chroma resolution details

Number: US20160212433A1
Assignee: Microsoft Technology Licensing LLC

Innovations in encoding and decoding of video pictures in a high-resolution chroma sampling format (such as YUV 4:4:4) using a video encoder and decoder operating on coded pictures in a low-resolution chroma sampling format (such as YUV 4:2:0) are presented. For example, high chroma resolution details are selectively encoded on a region-by-region basis. Or, as another example, coded pictures that contain sample values for low chroma resolution versions of input pictures and coded pictures that contain sample values for high chroma resolution details of the input pictures are encoded as separate sub-sequences of a single sequence of coded pictures, which can facilitate effective motion compensation. In this way, available encoders and decoders operating on coded pictures in the low-resolution chroma sampling format can be effectively used to provide high chroma resolution details.

09-10-2014 publication date

CONTROL DATA FOR MOTION-CONSTRAINED TILE SET

Number: US20140301464A1
Assignee: MICROSOFT CORPORATION

Control data for a motion-constrained tile set (“MCTS”) indicates that inter-picture prediction processes within a specified set of tiles are constrained to reference only regions within the same set of tiles in previous pictures in decoding (or encoding) order. For example, a video encoder encodes multiple pictures partitioned into tiles to produce encoded data. The encoder outputs the encoded data along with control data (e.g., in a supplemental enhancement information message) that indicates that inter-picture prediction dependencies across tile set boundaries are constrained for a given tile set of one or more of the tiles. A video decoder or other tool receives the encoded data and MCTS control data, and processes the encoded data. Signaling and use of MCTS control data can facilitate region-of-interest decoding and display, transcoding to limit encoded data to a selected set of tiles, loss robustness, parallelism in encoding and/or decoding, and other video processing. 1. A computer system adapted to perform a method comprising:encoding multiple pictures to produce encoded data, wherein each of the multiple pictures is partitioned into multiple tiles; andoutputting the encoded data along with control data that indicates that inter-picture prediction dependencies across specific boundaries are constrained for a given tile set of one or more tiles of the multiple tiles, wherein the given tile set is parameterized in the control data as one or more tile regions covering the one or more tiles of the multiple tiles.2. The computer system of wherein the one or more tile regions are one or more tile rectangles claim 1 , and wherein the control data includes claim 1 , for a given tile rectangle of the one or more tile rectangles in the given tile set claim 1 , syntax elements that identify two corners of the given tile rectangle.3. The computer system of wherein the two corners are a top-left corner of the given tile rectangle and a bottom-right corner of the given ...

28-07-2016 publication date

METADATA ASSISTED VIDEO DECODING

Number: US20160219288A1
Assignee: Microsoft Technology Licensing, LLC

A video decoder is disclosed that uses metadata in order to make optimization decisions. In one embodiment, metadata is used to choose which of multiple available decoder engines should receive a video sequence. In another embodiment, the optimization decisions can be based on length and location metadata information associated with a video sequence. Using such metadata information, a decoder engine can skip start-code scanning to make the decoding process more efficient. Also based on the choice of decoder engine, it can decide whether emulation prevention byte removal shall happen together with start code scanning or not. 1. In a computing device that implements a video decoder , a method comprising:receiving an encoded video sequence with a file container;with the computing device that implements the video decoder, analyzing metadata associated with the encoded video sequence in the file container; andusing the metadata to make decoder optimization decisions in the video decoder.2. The method of claim 1 , wherein the decoder optimization decisions include choosing a decoder engine claim 1 , based on the metadata claim 1 , to perform the decoding from a plurality of decoder engines.3. The method of claim 2 , wherein the plurality of decoder engines are chosen from a list including of one of the following: a decoder engine capable of decoding a video sequence of main profile and higher profiles and a decoder engine capable of decoding baseline claim 2 , main and higher profiles.4. The method of claim 3 , wherein the decoder engine capable of decoding a video sequence of main profile and higher profiles includes a graphics processing unit for hardware acceleration and the decoder engine capable of decoding baseline claim 3 , main and higher profiles includes a central processing unit.5. The method of claim 1 , further including searching the metadata for a type of algorithm used in the encoding and choosing a decoding engine based on the type of algorithm.6. The ...
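
The start-code-skipping optimization can be illustrated by walking the stream with per-unit offset/length metadata instead of scanning for start codes; the field names in the table are invented, and the scanning function is only there to show the two paths yield the same units.

    def iter_units_with_metadata(stream: bytes, unit_table):
        # unit_table: list of {"offset": int, "length": int} entries taken from the file container.
        for entry in unit_table:
            yield stream[entry["offset"]: entry["offset"] + entry["length"]]

    def iter_units_by_scanning(stream: bytes):
        # Fallback path: scan for 0x000001 start codes when no metadata is available.
        positions = []
        pos = stream.find(b"\x00\x00\x01")
        while pos != -1:
            positions.append(pos)
            pos = stream.find(b"\x00\x00\x01", pos + 3)
        for start, nxt in zip(positions, positions[1:] + [len(stream)]):
            yield stream[start + 3: nxt]

    stream = b"\x00\x00\x01\x65payload-a\x00\x00\x01\x41payload-b"
    table = [{"offset": 3, "length": 10}, {"offset": 16, "length": 10}]
    print(list(iter_units_with_metadata(stream, table)) == list(iter_units_by_scanning(stream)))   # True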

05-08-2021 publication date

ORGANIC LIGHT EMITTING MATERIAL

Number: US20210242411A1
Assignee:

Provided is an organic light-emitting material. The light-emitting material is a series of metal complexes containing a ligand(s) based on isoquinoline which is substituted with deuterium at 3- and 4-position and a ligand(s) based on acetylacetone. The compounds can be used as the light-emitting material in an emissive layer of an organic electroluminescent device. These novel compounds can provide better device performance. Further provided are an electroluminescent device and a compound combination including the light-emitting material. 2. The metal complex according to claim 1 , wherein the metal M is selected from the group consisting of Cu claim 1 , Ag claim 1 , Au claim 1 , Ru claim 1 , Rh claim 1 , Pd claim 1 , Os claim 1 , Ir and Pt; preferably claim 1 , the metal M is selected from Pt or Ir.3. The metal complex according to claim 1 , wherein at least one of Xto Xis selected from CR; preferably claim 1 , wherein Xto Xare claim 1 , at each occurrence identically or differently claim 1 , selected from CR.4. The metal complex according to claim 1 , wherein Xand/or Xare claim 1 , at each occurrence identically or differently claim 1 , selected from CR claim 1 , and Ris claim 1 , at each occurrence identically or differently claim 1 , selected from the group consisting of: hydrogen claim 1 , deuterium claim 1 , halogen claim 1 , substituted or unsubstituted alkyl having 1 to 20 carbon atoms claim 1 , substituted or unsubstituted cycloalkyl having 3 to 20 ring carbon atoms claim 1 , substituted or unsubstituted heteroalkyl having 1 to 20 carbon atoms claim 1 , substituted or unsubstituted aralkyl having 7 to 30 carbon atoms claim 1 , substituted or unsubstituted alkoxy having 1 to 20 carbon atoms claim 1 , substituted or unsubstituted aryloxy having 6 to 30 carbon atoms claim 1 , substituted or unsubstituted alkenyl having 2 to 20 carbon atoms claim 1 , substituted or unsubstituted aryl having 6 to 30 carbon atoms claim 1 , substituted or unsubstituted heteroaryl ...

04-07-2019 publication date

PROTECTED MEDIA DECODING SYSTEM SUPPORTING METADATA

Number: US20190208276A1
Assignee:

Video content is protected using a digital rights management (DRM) mechanism, the video content having been previously encrypted and compressed for distribution, and also including metadata such as closed captioning data, which might be encrypted or clear. The video content is obtained by a system of a computing device, the metadata is extracted from the video content and provided to a video decoder, and the video content is provided to a secure DRM component. The secure DRM component decrypts the video content and provides the decrypted video content to a secure decoder component of a video decoder. As part of the decryption, the secure DRM component drops the metadata that was included in the obtained video content. However, the video decoder receives the extracted metadata in a non-protected environment and thus is able to provide the extracted metadata and the decoded video content to a content playback application. 1. A method implemented in a computing device , the method comprising:obtaining video content from a media source, the video content including metadata as well as protected video content;extracting the metadata from the video content to obtain extracted metadata, the extracting the metadata including removing the metadata from the video content;providing the extracted metadata to a video decoder without providing the video content to the video decoder for processing by the video decoder;providing the video content with the metadata removed from the extracting to a secure digital rights management component;receiving, from the secure digital rights management component, a re-encrypted version of the video content, the re-encrypted version of the video content comprising a version of the video content from which the protected video content has been decrypted and re-encrypted based on a key of the computing device;providing the re-encrypted version of the video content to the video decoder for decoding of the re-encrypted version of the video content to ...

23-10-2014 publication date

PROTECTED MEDIA DECODING USING A SECURE OPERATING SYSTEM

Number: US20140314233A1
Assignee: MICROSOFT CORPORATION

Disclosed herein are representative embodiments of tools and techniques for facilitating decoding of protected media information using a secure operating system. According to one exemplary technique, encoded media information that is encrypted is received at a secure process of a secure operating system of a computing system. At least a portion of the encoded media information that is encrypted is decrypted in the secure process. The portion of the encoded media information includes header information. Additionally, the header information is sent from the secure operating system to a software decoder for control of decoding hardware. The software decoder is included in a process for an application. Also, the decoding hardware is securely provided access to the encoded media information for decoding of the encoded media information to produce decoded media information. 1. A method comprising:at a secure process of a secure operating system of a computing system, receiving encoded media information that is encrypted;in the secure process, decrypting at least a portion of the encoded media information that is encrypted, the at least a portion of the encoded media information comprising header information;from the secure operating system, sending the header information to a software decoder for control of decoding hardware; andsecurely providing the decoding hardware access to the encoded media information for decoding of the encoded media information to produce decoded media information.2. The method of claim 1 , wherein the decrypting at least the portion of the encoded media information comprises determining an amount of the encoded media information that includes the header information based at least in part on a cap amount of data or on an encoding format of the encoded media information.3. The method of claim 1 , wherein the sending the header information to the software decoder facilitates control of the decoding hardware through the software decoder sending one ...

20-08-2015 publication date

HOST ENCODER FOR HARDWARE-ACCELERATED VIDEO ENCODING

Number: US20150237356A1
Assignee: MICROSOFT CORPORATION

By controlling decisions for high layers of bitstream syntax for encoded video, a host encoder provides consistent behaviors even when used with accelerator hardware from different vendors across different hardware platforms. For example, the host encoder controls high-level behaviors of encoding and sets values of syntax elements for sequence layer and picture layer of an output bitstream (and possibly other layers such as slice-header layer), while using only a small amount of computational resources. An accelerator that includes the accelerator hardware then controls encoding decisions for lower layers of syntax, in a manner consistent with the values of syntax elements set by the host encoder, setting values of syntax elements for the lower layers of syntax, which allows the accelerator some flexibility in making its encoding decisions. 1. One or more computer-readable media storing computer-executable instructions for causing a computing system programmed thereby to perform a method comprising:with a host encoder, setting values of encoding control properties in response to one or more calls by an application across an interface exposed by the host encoder;with the host encoder, setting values of syntax elements of an output bitstream for at least one of sequence-layer syntax and picture-layer syntax for media;with the host encoder, filling one or more control structures with values of control parameters; andwith the host encoder, initiating encoding of the media by an accelerator that includes accelerator hardware, wherein the one or more control structures are passed across an accelerator interface situated between the host encoder and the accelerator hardware, thereby facilitating control by the accelerator of encoding operations subject to the values of syntax elements set by the host encoder for the at least one of sequence-layer syntax and picture-layer syntax.2. The one or more computer-readable media of wherein the interface exposed by the host encoder ...

10-08-2017 publication date

VIDEO DECODER MEMORY OPTIMIZATION

Number: US20170230677A1
Assignee:

Techniques are described for optimizing memory used by a video decoder. A residual coefficient matrix including non-zero value residual coefficients of a larger parent matrix with both non-zero and zero value residual coefficients can be provided to the decoder. Residual coefficient matrix metadata can also be provided so that a modified and reduced inverse transform matrix can be selected and applied to the residual coefficient matrix. 1. A computer implemented method comprising:receiving, by an electronic device, media content data including a residual coefficient matrix representing differences between a portion of an image frame of media content and one or more reference frames of the media content, the residual coefficient matrix including non-zero value residual coefficients of a parent matrix, and the media content data including residual coefficient matrix metadata indicating a size the parent matrix;determining, by the electronic device, a size of the residual coefficient matrix;selecting, by the electronic device, an inverse transform matrix based on the size of the residual coefficient matrix and the size of the parent matrix; andapplying, by the electronic device, the inverse transform matrix to the residual coefficient matrix to decode the portion of the image frame of the media content.2. The computer implemented method of claim 1 , wherein the size of the parent matrix is larger than the size of the residual coefficient matrix.3. The computer implemented method of claim 2 , wherein parent matrix includes the non-zero value residual coefficients and zero value residual coefficients.4. The computer implemented method of claim 1 , wherein a size of the inverse transform matrix and the size of the residual coefficient matrix are the same.5. A system claim 1 , comprising:one or more processors and memory configured to:receive media content data indicating a residual coefficient matrix corresponding to a portion of an image frame of media content, the ...
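
The saving can be seen with a small linear-algebra example: if only the top-left k x k corner of the parent coefficient matrix is non-zero, applying the inverse transform built from the first k basis rows to the small matrix gives the same residual block as padding to the parent size and applying the full transform. The sketch uses an orthonormal DCT-II as a stand-in for the codec's integer transform.

    import numpy as np

    def dct_matrix(n):
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        t = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
        t[0, :] = np.sqrt(1.0 / n)
        return t                                         # rows are the transform basis

    parent, kept = 8, 4                                  # 8x8 parent matrix, 4x4 non-zero corner
    coeff_small = np.random.randn(kept, kept)            # the residual coefficients actually sent
    coeff_full = np.zeros((parent, parent))
    coeff_full[:kept, :kept] = coeff_small

    T = dct_matrix(parent)
    full_idct = T.T @ coeff_full @ T                     # inverse transform on the zero-padded parent
    T_reduced = T[:kept, :]                              # only the basis rows the non-zero rows excite
    reduced_idct = T_reduced.T @ coeff_small @ T_reduced

    print(np.allclose(full_idct, reduced_idct))          # True: smaller matrices, same residual block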

16-08-2018 publication date

SIGNALING OF STATE INFORMATION FOR A DECODED PICTURE BUFFER AND REFERENCE PICTURE LISTS

Number: US20180234698A1
Assignee: Microsoft Technology Licensing, LLC

Innovations for signaling state of a decoded picture buffer (“DPB”) and reference picture lists (“RPLs”). In example implementations, rather than rely on internal state of a decoder to manage and update DPB and RPLs, state information about the DPB and RPLs is explicitly signaled. This permits a decoder to determine which pictures are expected to be available for reference from the signaled state information. For example, an encoder determines state information that identifies which pictures are available for use as reference pictures (optionally considering feedback information from a decoder about which pictures are available). The encoder sets syntax elements that represent the state information. In doing so, the encoder sets identifying information for a long-term reference picture (“LTRP”), where the identifying information is a value of picture order count least significant bits for the LTRB. The encoder then outputs the syntax elements as part of a bitstream. 144.-. (canceled)45. A computing system comprising a processor and memory , wherein the computing system implements a video decoder , and wherein the computing system is configured to perform operations comprising:receiving at least part of a bitstream;parsing syntax elements from the bitstream, wherein the syntax elements represent long-term reference picture (“LTRP”) status information for a current picture among pictures of a sequence, wherein the LTRP status information for the current picture identifies which pictures, if any, are available for use as LTRPs for the current picture, the syntax elements including identifying information for a given LTRP in the LTRP status information for the current picture, and wherein the identifying information for the given LTRP is a value of picture order count least significant bits (“POC LSBs”), modulo a most significant bit wrapping point, for the given LTRP for the current picture; andusing the LTRP status information during decoding, wherein the value of the ...
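
A small sketch of the identifying value itself: the LTRP is referred to by its picture order count LSBs modulo the wrapping point, and the decoder matches that value against the pictures it still holds. The wrapping point below is picked for illustration; real bitstreams signal it (for example via a log2-style syntax element).

    MAX_POC_LSB = 1 << 8                                  # assumed most-significant-bit wrapping point

    def ltrp_identifier(picture_order_count: int) -> int:
        return picture_order_count % MAX_POC_LSB          # POC LSBs identify the LTRP in the bitstream

    def find_ltrp(decoded_picture_buffer, signalled_poc_lsbs: int):
        # The decoder matches the signalled LSBs against the pictures it still holds.
        for poc in decoded_picture_buffer:
            if ltrp_identifier(poc) == signalled_poc_lsbs:
                return poc
        return None                                       # picture no longer available

    dpb = [240, 256, 272]
    print(ltrp_identifier(272), find_ltrp(dpb, ltrp_identifier(272)))   # 16 identifies the picture with POC 272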

24-09-2015 publication date

FAST AND SMART VIDEO TRIMMING AT FRAME ACCURACY ON GENERIC PLATFORM

Number: US20150269967A1
Assignee: MICROSOFT CORPORATION

In a computing device that implements an encoder, a method comprises receiving an encoded video sequence with a file container, receiving input to execute a trimming operation to create a frame accurate target segment of one or more desired pictures from the encoded video sequence and trimming to frame accuracy. Trimming to frame accuracy is accomplished by changing the parameter identifications of leading and trailing portions, if supported, or changing the parameters, and using the changed parameters or parameter identifications in re-encoding the leading and trailing portions, while an untouched middle portion between the leading and trailing portions is re-muxed without re-encoding. 1. One or more computer-readable media storing computer-executable instructions for causing a computing system programmed thereby to perform a method comprising:receiving media content comprising a stream of pictures comprising clear start pictures and other pictures;receiving a target range extending from a target range starting point to a target range ending point within the stream of pictures and specifying at least one desired picture from which other pictures in the stream of pictures will be trimmed;determining a first clear start picture preceding the target range starting point and a second clear start picture following the target range starting point;decoding the media content from the first clear start picture to the second clear start picture;re-encoding the media content in a leading portion of the target range defined from the target range starting point to the second clear start picture;determining a third clear start picture preceding the target range ending point;re-muxing pictures in a middle portion defined to extend from the second clear start picture to the third clear start picture;determining a fourth clear start picture following the target range ending point;decoding the media content from the third clear start picture to the fourth clear start picture;re- ...
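
The split into a re-encoded leading portion, a re-muxed middle and a re-encoded trailing portion can be planned from the positions of the clear start pictures around the requested range. The sketch below treats clear start pictures as a sorted list of frame numbers; the names and the dictionary layout are invented.

    import bisect

    def plan_trim(clear_start_frames, range_start, range_end):
        second = clear_start_frames[bisect.bisect_right(clear_start_frames, range_start)]
        third = clear_start_frames[bisect.bisect_right(clear_start_frames, range_end) - 1]
        return {
            "re-encode leading": (range_start, second),    # decoded and re-encoded for frame accuracy
            "re-mux middle": (second, third),              # copied without re-encoding
            "re-encode trailing": (third, range_end),      # decoded and re-encoded up to the cut point
        }

    keyframes = [0, 30, 60, 90, 120, 150]
    print(plan_trim(keyframes, range_start=42, range_end=131))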

More details
22-09-2016 publication date

PACKAGING/MUX AND UNPACKAGING/DEMUX OF GEOMETRIC DATA TOGETHER WITH VIDEO DATA

Number: US20160277751A1
Assignee:

Technologies are described herein for providing enhanced packaging, coding, decoding and unpackaging of geometric data. In some configurations, geometric data is obtained by a device. The geometric data is partitioned into data partitions representing reconstruction information for video frames. The data partitions representing frames are then converted and integrated into a network abstraction layer of a bit stream. Geometric data may be obtained from the bit stream by accessing the data partitions from the network abstraction layer. The data partitions can then be processed into geometric data for further processing, such as the reconstruction, generation, display or processing of a three dimensional (3D) object modeled by the geometric data. 1. A computer-implemented method, the method comprising: obtaining geometric data; obtaining video data; partitioning the geometric data into individual geometric data partitions associated with individual frames; generating individual network abstraction layer-compliant geometric data partitions from the individual geometric data partitions; partitioning the video data into individual video data partitions associated with the individual frames; and integrating the individual network abstraction layer-compliant geometric data partitions with the individual video data partitions into a network abstraction layer of a bit stream conformant to a video coding standard and a file format standard. 2. The method of claim 1, further comprising: parsing the bit stream to extract the individual network abstraction layer-compliant geometric data partitions and the individual video data partitions; generating the individual geometric data partitions from the individual network abstraction layer-compliant geometric data partitions; processing the individual geometric data partitions to generate the geometric data; and processing the individual video data partitions to generate the video data. 3. The method of claim 1, wherein an individual network ...
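
As a rough illustration of interleaving per-frame geometry partitions with video data in a NAL-style byte stream, the following sketch uses an assumed 4-byte start code and invented one-byte unit-type tags; it is not the actual network abstraction layer syntax.

```python
# Illustrative sketch: wrapping per-frame geometry partitions as NAL-style units
# and interleaving them with video units for the same frame. The 4-byte start
# code and the one-byte unit-type values are assumptions of this sketch only.
# (Emulation prevention of start codes inside payloads is omitted here.)

START_CODE = b"\x00\x00\x00\x01"
VIDEO_UNIT = 0x01       # hypothetical type tag for a video partition
GEOMETRY_UNIT = 0x7E    # hypothetical type tag for a geometry partition

def make_unit(unit_type, payload):
    """Frame a payload as a start-code-prefixed unit with a 1-byte type header."""
    return START_CODE + bytes([unit_type]) + payload

def mux_frames(video_partitions, geometry_partitions):
    """Interleave video and geometry partitions frame by frame into one byte stream."""
    stream = bytearray()
    for video, geometry in zip(video_partitions, geometry_partitions):
        stream += make_unit(VIDEO_UNIT, video)
        stream += make_unit(GEOMETRY_UNIT, geometry)
    return bytes(stream)

def demux_geometry(stream):
    """Extract the geometry payloads back out of the byte stream."""
    geometry = []
    for chunk in stream.split(START_CODE)[1:]:
        if chunk and chunk[0] == GEOMETRY_UNIT:
            geometry.append(chunk[1:])
    return geometry

frames = mux_frames([b"video0", b"video1"], [b"mesh0", b"mesh1"])
assert demux_geometry(frames) == [b"mesh0", b"mesh1"]
```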

More details
22-09-2016 publication date

APPLICATION- OR CONTEXT-GUIDED VIDEO DECODING PERFORMANCE ENHANCEMENTS

Number: US20160277768A1
Assignee: Microsoft Technology Licensing, LLC

Disclosed herein are innovations in decoding compressed video media data. The disclosed innovations facilitate decoding operations with improved computational efficiency, faster speeds, reduced power, reduced memory usage, and/or reduced latency. In one embodiment, for example, an encoded bitstream of video media data is input from an external video content provider, the encoded bitstream being encoded according to a video codec standard. A decoder is then configured to decode the encoded bitstream based at least in part on supplemental information that identifies a property of the encoded bitstream but that is supplemental to the encoded bitstream (e.g., supplemental information that is not part of the encoded bitstream or its associated media container and that is specific (or related) to the application for which the bitstream is used and/or the standard by which the bitstream is encoded and/or encrypted). 1. A video decoding system, comprising: a decoder configured to perform video decoding operations; and a decoder controller configured to operate the decoder, the decoder controller being further configured to: input an encoded bitstream of video media data from a video content provider; input supplemental data from the video content provider that is separate from the encoded bitstream and that specifies a first bitstream property, a syntax element in the encoded bitstream specifying a second bitstream property that at least in part contradicts the first bitstream property; and set a performance characteristic of the decoder using the first bitstream property without using the second bitstream property, thereby overriding the syntax element from the encoded bitstream. 2. The video decoding system of claim 1, wherein the video decoding system is part of an operating system and communicates with an application written by the video content provider via an application program interface for the operating system. 3. The video decoding system of claim 2, ...
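
The override behavior can be sketched compactly. The sketch below assumes both the supplemental data and the parsed syntax element arrive as plain key/value properties; the property name max_reorder_frames is purely illustrative.

```python
# Illustrative sketch: a decoder controller prefers an application-supplied
# (supplemental) bitstream property over a contradicting value parsed from the
# bitstream itself when configuring the decoder. All names are hypothetical.

def choose_property(supplemental_value, parsed_syntax_value):
    """Prefer the supplemental value; fall back to the parsed syntax element."""
    return supplemental_value if supplemental_value is not None else parsed_syntax_value

class DecoderController:
    def __init__(self, decoder):
        self.decoder = decoder

    def configure(self, supplemental, parsed):
        # e.g. the app promises the stream needs no reordering even though the
        # bitstream headers allow it; the decoder can then reduce latency.
        max_reorder = choose_property(supplemental.get("max_reorder_frames"),
                                      parsed.get("max_reorder_frames"))
        self.decoder["max_reorder_frames"] = max_reorder

decoder = {}
DecoderController(decoder).configure({"max_reorder_frames": 0},
                                     {"max_reorder_frames": 4})
assert decoder["max_reorder_frames"] == 0  # supplemental value overrides the syntax element
```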

More details
22-09-2016 publication date

STANDARD-GUIDED VIDEO DECODING PERFORMANCE ENHANCEMENTS

Number: US20160277769A1
Assignee: Microsoft Technology Licensing, LLC

Disclosed herein are innovations in decoding compressed video media data. The disclosed innovations facilitate decoding operations with improved computational efficiency, faster speeds, reduced power, reduced memory usage, and/or reduced latency. In one embodiment, for example, an encoded bitstream of video media data is input from an external video content provider, the encoded bitstream being encoded according to a video codec standard. A decoder is then configured to decode the encoded bitstream based at least in part on supplemental information that identifies a property of the encoded bitstream but that is supplemental to the encoded bitstream (e.g., supplemental information that is not part of the encoded bitstream or its associated media container and that is specific (or related) to the application for which the bitstream is used and/or the standard by which the bitstream is encoded and/or encrypted). 1. A video decoding system, comprising: a decoder configured to perform video decoding operations; and a decoder controller configured to operate the decoder, the decoder controller being further configured to: input a media file from an external video content provider, the media file arranged according to a media file format standard and comprising an encoded bitstream encoded according to a video codec standard; input supplemental information that is separate from the media file and provides an identity of the media file format standard by which the media file was assembled; and modify, based at least in part on the supplemental information, a decoding process performed by the decoder to decode the encoded bitstream. 2. The video decoding system of claim 1, wherein the modifying reduces start code search time in the decoding process performed by the decoder. 3. The video decoding system of claim 1, wherein the modifying comprises modifying the decoding process performed by the decoder to: identify a network abstraction layer unit (“NALU”) length from ...
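
The reduction of start code search time mentioned in claim 2 can be illustrated: if the supplemental information says the container uses length-prefixed NAL units (as MP4-style formats do) rather than Annex B start codes, the parser can jump by length instead of scanning byte by byte. The framing details in the sketch below are simplified assumptions.

```python
# Illustrative sketch: choosing a NAL-unit walking strategy from a container hint.
# Length-prefixed parsing jumps directly using a 4-byte big-endian length field,
# avoiding a byte-wise start code search. Framing details are simplified.

import struct

def walk_length_prefixed(data):
    """Yield NAL unit payloads assuming 4-byte length prefixes (MP4-style framing)."""
    pos = 0
    while pos + 4 <= len(data):
        (length,) = struct.unpack_from(">I", data, pos)
        pos += 4
        yield data[pos:pos + length]
        pos += length

def walk_start_codes(data):
    """Yield NAL unit payloads by scanning for 0x000001 start codes (Annex B framing)."""
    for chunk in data.split(b"\x00\x00\x01")[1:]:
        yield chunk.rstrip(b"\x00")  # crude: trailing zeros belong to the next start code

def walk_units(data, container_hint):
    """Pick the cheaper walking strategy when the container format is known."""
    if container_hint == "length_prefixed":
        return walk_length_prefixed(data)
    return walk_start_codes(data)

mp4_like = struct.pack(">I", 3) + b"abc" + struct.pack(">I", 2) + b"de"
assert list(walk_units(mp4_like, "length_prefixed")) == [b"abc", b"de"]
```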

More details
18-12-2014 publication date

Remultiplexing Bitstreams of Encoded Video for Video Playback

Number: US20140369422A1
Assignee:

An encoded bitstream is processed without re-encoding so as to recombine multiple packets of each image into contiguous data of one packet for the image. Each packet is assigned a presentation time stamp, corresponding to the display order of its image in the sequence of images. In one embodiment, each intra-frame compressed image also is marked as a recovery point indicating that a decompression processor empties its buffers of data for prior groups of pictures before processing the image. A video editing or other playback application uses the converted bitstream for scrubbing and similar playback operations. 1. A computer-implemented process performed by a processor in a computer, comprising: receiving, into memory, an original bitstream of video data wherein, for each image in a sequence of images, the bitstream includes a plurality of packets of data including compressed data for the image; processing the bitstream of video data to gather the compressed video data for each image; forming a single packet comprising contiguous compressed video data for each image, wherein the single packet further has an associated presentation time stamp for the image; and storing the packets for the images as a converted bitstream in a data file format for use in playback. 2. The computer-implemented process of claim 1, further comprising marking each intraframe compressed image in the converted bitstream as a recovery point indicating a decompression processor empties buffers of data from prior groups of pictures before processing the image. 3. The computer-implemented process of claim 1, wherein the original bitstream is compliant with an MPEG-2 transport stream file format. 4. The computer-implemented process of claim 1, wherein the compressed data is compliant with H.264/AVC standard. 5. The computer-implemented process of claim 1, wherein scrubbing playback by the video editing application uses the converted bitstream. 6. The computer-implemented process of claim 1, wherein ...
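
A small sketch of the repackaging step: packets carrying pieces of the same image are concatenated into one contiguous packet and tagged with a presentation time stamp derived from display order. The packet representation and the 90 kHz timing are assumptions of the sketch, not the original file format.

```python
# Illustrative sketch: recombining the multiple packets of each image into a
# single contiguous packet with a presentation time stamp (PTS) per image.
# The (image_id, payload) packet representation and the 90 kHz tick math are
# assumptions of this sketch.

from collections import OrderedDict

def remux(packets, display_order, ticks_per_frame=3003):  # ~29.97 fps at 90 kHz
    """Group packet payloads by image and emit one packet per image with a PTS."""
    by_image = OrderedDict()
    for image_id, payload in packets:
        by_image.setdefault(image_id, bytearray()).extend(payload)

    converted = []
    for image_id, data in by_image.items():
        pts = display_order[image_id] * ticks_per_frame
        converted.append({"image": image_id, "pts": pts, "data": bytes(data)})
    return converted

packets = [("I0", b"aa"), ("B1", b"b"), ("I0", b"a2"), ("B1", b"b2")]
out = remux(packets, display_order={"I0": 0, "B1": 1})
assert out[0] == {"image": "I0", "pts": 0, "data": b"aaa2"}
assert out[1]["pts"] == 3003
```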

More details
08-10-2015 publication date

Adaptive quantization for video rate control

Number: US20150288965A1
Assignee: Microsoft Corp

According to a first aspect of the innovations described herein, video encoding, such as game video encoding, is improved with the goal of generating substantially constant video quality while keeping the average bitrate within a desired tolerance of the target, which improves the overall user experience on video playback. An adaptive solution applies an intelligent bias to bit allocation and quantization decisions, locally within a frame and globally across different frames, based on the current quality level and within an allowed bitrate tolerance. Bit allocation is increased for high-complexity frames, and redundant bits that would otherwise be wasted on static scenes and low-complexity content are avoided. Statistics from the encoding process can be used. The solution can address similar video coding quality problems for video game recording on a variety of gaming platforms.
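
A toy sketch of the kind of bias described above: the quantization parameter for a frame is nudged by frame complexity and by how far the running bitrate has drifted outside the allowed tolerance. All constants and thresholds below are invented for illustration.

```python
# Illustrative sketch of adaptive quantization for rate control: spend more bits
# (lower QP) on complex frames and fewer on static ones, while steering the
# running average bitrate back toward the target within an allowed tolerance.
# All constants and thresholds are invented for this sketch.

def next_qp(base_qp, complexity, avg_complexity, avg_bitrate, target_bitrate,
            tolerance=0.10, qp_min=18, qp_max=42):
    qp = base_qp

    # Local bias: high-complexity frames get more bits, static frames fewer.
    if complexity > 1.5 * avg_complexity:
        qp -= 2
    elif complexity < 0.5 * avg_complexity:
        qp += 2

    # Global bias: only correct when outside the allowed bitrate tolerance.
    drift = (avg_bitrate - target_bitrate) / target_bitrate
    if drift > tolerance:
        qp += 1   # running too hot, quantize more coarsely
    elif drift < -tolerance:
        qp -= 1   # running too cold, spend some of the headroom

    return max(qp_min, min(qp_max, qp))

# A busy frame while the stream is under budget gets a noticeably lower QP.
print(next_qp(base_qp=30, complexity=9.0, avg_complexity=4.0,
              avg_bitrate=4.2e6, target_bitrate=5.0e6))  # -> 27
```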

More details
05-09-2019 publication date

Supplemental enhancement information including confidence level and mixed content information

Number: US20190273927A1
Assignee: Microsoft Technology Licensing LLC

This application relates to video encoding and decoding, and specifically to tools and techniques for using and providing supplemental enhancement information in bitstreams. Among other things, the detailed description presents innovations for bitstreams having supplemental enhancement information (SEI). In particular embodiments, the SEI message includes picture source data (e.g., data indicating whether the associated picture is a progressive scan picture or an interlaced scan picture and/or data indicating whether the associated picture is a duplicate picture). The SEI message can also express the encoder's relative confidence in the accuracy of this picture source data. A decoder can use the confidence level indication to determine whether it should independently identify the picture as progressive or interlaced and/or as a duplicate picture, or honor the picture source scanning information in the SEI as is.
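
A minimal sketch of how a decoder might act on such a message, assuming the SEI fields are exposed as a simple record; the field names and the confidence scale are illustrative, not the actual SEI syntax.

```python
# Illustrative sketch: honoring or double-checking SEI picture source data based
# on the encoder's stated confidence. Field names and the confidence scale are
# assumptions of this sketch.

from dataclasses import dataclass

@dataclass
class PictureSourceSEI:
    interlaced: bool   # encoder's claim: interlaced (True) or progressive (False)
    duplicate: bool    # encoder's claim: duplicate of the previous picture
    confidence: int    # 0 = unknown/guess ... 3 = certain (illustrative scale)

def resolve_scan_type(sei, detect_interlace):
    """Trust the SEI when confidence is high, otherwise run the decoder's own detection."""
    if sei.confidence >= 2:
        return sei.interlaced
    return detect_interlace()  # decoder-side analysis, e.g. field-difference heuristics

# With low confidence, the decoder's own detector wins.
sei = PictureSourceSEI(interlaced=True, duplicate=False, confidence=1)
assert resolve_scan_type(sei, detect_interlace=lambda: False) is False
```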

More details
06-10-2016 publication date

PERFORMING PROCESSING-INTENSIVE OPERATION ON MULTI-TASKING LIMITED-CAPACITY DEVICES

Number: US20160293212A1
Assignee:

A facility for completing a set of operations is described. Under the control of an application, the facility registers the background task to perform the set of operations. In response to the registration of the background task, the facility repeatedly invokes the background task to perform the set of operations. 1. A method in a computing system configured to generate a video sequence, the method comprising: under the control of a video-editing application, registering a background task of the application to generate a video sequence based upon a video composition generated with the video-editing application; in response to the registration of the background task, repeatedly invoking the background task to generate the video sequence; and in response to the background task completing generation of the video sequence, notifying the video-editing application. 2. The method of claim 1, further comprising, under the control of the video-editing application, presenting the generated video sequence. 3. The method of claim 1, further comprising sharing the generated video sequence from a first user to one or more second users. 4. The method of claim 1, further comprising, in the background task, for each of a plurality of time-contiguous portions of the generated video sequence: constructing the portion of the generated video sequence; and persistently storing the constructed portion of the generated video sequence. 5. The method of claim 4, wherein, during construction of a distinguished portion of the generated video sequence, before persistently storing any of the distinguished portion of the generated video sequence, processing of the background task is interrupted, and wherein, in response to the background task again being invoked after interruption of the processing of the background task, in the background task, the distinguished portion of the ...
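
The portion-by-portion persistence in claims 4 and 5 can be sketched roughly as follows; the persistent store, the render step and the interruption signal are hypothetical stand-ins for whatever the platform provides.

```python
# Illustrative sketch: a background task that generates a video sequence in
# time-contiguous portions, persisting each portion so that an interrupted run
# can resume at the first unfinished portion. Storage and rendering are
# hypothetical stand-ins.

import json
import os

STATE_FILE = "render_state.json"  # hypothetical persistent store

def load_done():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return set(json.load(f))
    return set()

def persist_done(done):
    with open(STATE_FILE, "w") as f:
        json.dump(sorted(done), f)

def background_task(composition, portion_count, render_portion, interrupted):
    """One invocation of the task; it may be invoked repeatedly until it finishes."""
    done = load_done()
    for index in range(portion_count):
        if index in done:
            continue              # already constructed and persisted in an earlier invocation
        if interrupted():
            return False          # construction of this portion starts over next time
        render_portion(composition, index)
        done.add(index)
        persist_done(done)        # persist before moving on to the next portion
    return True                   # caller notifies the video-editing application

# Repeatedly invoke until the whole sequence is generated.
finished = background_task("my_composition", 4,
                           render_portion=lambda c, i: None,
                           interrupted=lambda: False)
assert finished
```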

More details
06-10-2016 publication date

DIGITAL CONTENT STREAMING FROM DIGITAL TV BROADCAST

Number: US20160295256A1
Assignee: Microsoft Technology Licensing, LLC

Techniques are described for remuxing multimedia content received in a digital video broadcasting format without performing transcoding of the video and/or audio content. For example, a computing device with a digital television tuner can receive multimedia content in a digital video broadcast format. The computing device can remux the received multimedia content from the digital video broadcasting format in which the multimedia content is received into a target streaming protocol for streaming to other devices. Remuxing operations can comprise demultiplexing the received multimedia content to separate the audio and video content, performing meta-data reconstruction, and multiplexing the audio and video content into a target stream using a target streaming protocol format. 1. A computing device comprising: a processing unit; memory; and an antenna configured for receiving digital video broadcast television signals; receiving, via the antenna, the multimedia content in a digital video broadcasting format, the multimedia content comprising audio content and video content; determining a target streaming protocol; demultiplexing the multimedia content in the digital video broadcasting format to separate the audio content and the video content; for the video content: performing meta-data reconstruction for the video content based, at least in part, on the target streaming protocol; and multiplexing the video content in a target stream according to the target streaming protocol using the reconstructed meta-data and without transcoding the video content; when an audio coding format of the audio content is compatible with a target computing device, multiplexing the audio content in the target stream according to the target streaming protocol without transcoding the audio content; and otherwise, when the audio coding format of the audio content is not compatible with the target computing device, transcoding the audio content to a different audio coding format ...
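
A schematic sketch of the decision in claim 1: video is always remuxed with reconstructed metadata, while audio is transcoded only when its coding format is not compatible with the target device. The formats, compatibility table and helper functions below are assumptions of the sketch.

```python
# Illustrative sketch: remuxing broadcast content into a target streaming
# protocol, passing video through untouched and transcoding audio only when its
# format is not supported by the target device. All formats, the compatibility
# table and the helper functions are assumptions of this sketch.

TARGET_AUDIO_SUPPORT = {"aac"}  # hypothetical target-device capability

def rebuild_metadata(video, protocol):
    """Stand-in for protocol-specific metadata reconstruction (timestamps, headers)."""
    return {"codec": video["codec"], "protocol": protocol}

def transcode_audio(audio, to_format):
    """Stand-in for an actual audio transcode."""
    return {"codec": to_format, "data": audio["data"]}

def remux_broadcast(broadcast, protocol="hls"):
    video, audio = broadcast["video"], broadcast["audio"]  # demultiplexed elsewhere
    target = {
        "protocol": protocol,
        "video": {**video, "meta": rebuild_metadata(video, protocol)},  # no transcoding
    }
    if audio["codec"] in TARGET_AUDIO_SUPPORT:
        target["audio"] = audio                          # pass through
    else:
        target["audio"] = transcode_audio(audio, "aac")  # convert only the audio
    return target

stream = remux_broadcast({"video": {"codec": "h264", "data": b"..."},
                          "audio": {"codec": "mp2", "data": b"..."}})
assert stream["audio"]["codec"] == "aac" and stream["video"]["codec"] == "h264"
```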

More details
20-10-2016 publication date

SPLIT PROCESSING OF ENCODED VIDEO IN STREAMING SEGMENTS

Number: US20160308931A1
Assignee: Microsoft Technology Licensing, LLC

Techniques are described for split processing of streaming segments in which processing operations are split between a source component and a decoder component. For example, the source component can perform operations for receiving a streaming segment, demultiplexing the streaming segment to separate a video content bit stream, scanning the video content bit stream to find a location at which decoding can begin (e.g., scanning up to a first decodable I-picture, for which header parameter sets are available for decoding), and sending the video content bit stream to the decoder component beginning at the location (e.g., the first decodable I-picture). The decoder component can begin decoding at the identified location (e.g., the first decodable I-picture). The decoder component can also discard subsequent pictures that reference a reference picture not present in the video content bit stream (e.g., when decoding starts with a new streaming segment). 1. A computing device comprising: a processing unit; memory; a source component; and a decoder component; the processing unit configured to perform operations for split ...: by the source component: receiving a streaming segment, the streaming segment comprising a video content bit stream; demultiplexing the streaming segment to separate the video content bit stream; scanning the video content bit stream up to a first decodable I-picture; and sending the video content bit stream beginning with the first decodable I-picture to the decoder component, wherein bits in the video content bit stream before the first decodable I-picture are not sent to the decoder component; and by the decoder component: decoding video content using the video content bit stream received from the source component beginning with the first decodable I-picture; during the decoding of the video content, discarding any picture referencing a reference picture prior to the first decodable I-picture; and outputting the decoded video content.
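
The split between the source component and the decoder component can be sketched as two small functions: one trims the bit stream to the first decodable I-picture, the other drops pictures whose references precede that point. The picture-record representation is an assumption of the sketch.

```python
# Illustrative sketch of split processing: the source component trims the bit
# stream so decoding starts at the first decodable I-picture, and the decoder
# component discards pictures that reference anything before that point.
# The picture-record representation is an assumption of this sketch.

def trim_to_first_decodable_i(pictures, have_parameter_sets=True):
    """Source side: return pictures starting at the first I-picture that can be decoded."""
    for index, pic in enumerate(pictures):
        if pic["type"] == "I" and have_parameter_sets:
            return pictures[index:]  # bits before this point are never sent
    return []

def decode_segment(pictures):
    """Decoder side: decode in order, skipping pictures with unavailable references."""
    available = set()
    decoded = []
    for pic in pictures:
        if any(ref not in available for ref in pic.get("refs", [])):
            continue                 # reference precedes the first decodable I-picture
        decoded.append(pic["id"])
        available.add(pic["id"])
    return decoded

segment = [
    {"id": "P0", "type": "P", "refs": ["I_prev"]},  # depends on a picture never received
    {"id": "I1", "type": "I", "refs": []},
    {"id": "P2", "type": "P", "refs": ["I_prev"]},  # stale reference: discarded
    {"id": "P3", "type": "P", "refs": ["I1"]},
]
assert decode_segment(trim_to_first_decodable_i(segment)) == ["I1", "P3"]
```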

More details