Total found: 146. Displayed: 146.

Publication date: 08-11-2022

Controller with caching and non-caching modes

Number: US0011494224B2

An apparatus includes a CPU core, a first cache subsystem coupled to the CPU core, and a second memory coupled to the cache subsystem. The first cache subsystem includes a configuration register, a first memory, and a controller. The controller is configured to: receive a request directed to an address in the second memory and, in response to the configuration register having a first value, operate in a non-caching mode. In the non-caching mode, the controller is configured to provide the request to the second memory without caching data returned by the request in the first memory. In response to the configuration register having a second value, the controller is configured to operate in a caching mode. In the caching mode the controller is configured to provide the request to the second memory and cache data returned by the request in the first memory.
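The mode select described in this abstract is easy to model in software. The following C sketch is illustrative only (not TI's implementation; all names and sizes are invented): a read either bypasses a small direct-mapped cache or allocates into it, depending on a configuration register value.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define LINES 4u
#define NON_CACHING 0u   /* the "first value" in the abstract  */
#define CACHING     1u   /* the "second value" in the abstract */

static uint32_t backing[16];    /* stands in for the second memory */
static struct { bool valid; uint32_t addr; uint32_t data; } cache[LINES];
static uint32_t config_reg = NON_CACHING;

static uint32_t controller_read(uint32_t addr) {
    if (config_reg == NON_CACHING)
        return backing[addr];                 /* forward request; cache untouched */
    uint32_t i = addr % LINES;
    if (cache[i].valid && cache[i].addr == addr)
        return cache[i].data;                 /* hit in the first memory */
    uint32_t d = backing[addr];               /* miss: read the second memory */
    cache[i].valid = true;                    /* ...and cache the returned data */
    cache[i].addr = addr;
    cache[i].data = d;
    return d;
}

int main(void) {
    backing[5] = 42;
    printf("non-caching read: %u\n", controller_read(5));
    config_reg = CACHING;
    printf("caching read:     %u\n", controller_read(5));  /* allocates a line */
    printf("second read:      %u\n", controller_read(5));  /* served from cache */
    return 0;
}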

Publication date: 31-08-2021

Hardware coherence for memory controller

Number: US0011106584B2

A system includes a non-coherent component; a coherent, non-caching component; a coherent, caching component; and a level two (L2) cache subsystem coupled to the non-coherent component, the coherent, non-caching component, and the coherent, caching component. The L2 cache subsystem includes a L2 cache; a shadow level one (L1) main cache; a shadow L1 victim cache; and a L2 controller. The L2 controller is configured to receive and process a first transaction from the non-coherent component; receive and process a second transaction from the coherent, non-caching component; and receive and process a third transaction from the coherent, caching component.

Publication date: 04-06-2024

Write control for read-modify-write operations in cache memory

Number: US0012001282B2
Assignee: Texas Instruments Incorporated

In described examples, a processor system includes a processor core that generates memory write requests, and a cache memory with a memory controller having a memory pipeline. The cache memory has cache lines of length L. The cache memory has a minimum write length that is less than a cache line length of the cache memory. The memory pipeline determines whether the data payload includes a first chunk and ECC syndrome that correspond to a partial write and are writable by a first cache write operation, and a second chunk and ECC syndrome that correspond to a full write operation that can be performed separately from the first cache write operation. The memory pipeline performs an RMW operation to store the first chunk and ECC syndrome in the cache memory, and performs the full write operation to store the second chunk and ECC syndrome in the cache memory.

Publication date: 03-03-2015

Asynchronous clock dividers to reduce on-chip variations of clock timing

Number: US0008970267B2

This invention is a means to definitively establish the occurrence of various clock edges used in a design, balancing clock edges at various locations within an integrated circuit. Clocks entering from outside sources can be a source of on-chip-variations (OCV) resulting in unacceptable clock edge skewing. The present invention arranges placement of the various clock dividers on the chip at remote locations where these clocks are used. This minimizes the uncertainty of the edge occurrence.

Publication date: 30-12-2021

CACHE MANAGEMENT OPERATIONS USING STREAMING ENGINE

Number: US20210406014A1

A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache management operation is performed on a cache in the memory using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.

Publication date: 26-11-2020

PIPELINED READ-MODIFY-WRITE OPERATIONS IN CACHE MEMORY

Number: US20200371877A1
Assignee: Texas Instruments Inc

In described examples, a processor system includes a processor core that generates memory write requests, a cache memory, and a memory pipeline of the cache memory. The memory pipeline has a holding buffer, an anchor stage, and an RMW pipeline. The anchor stage determines whether a data payload of a write request corresponds to a partial write. If so, the data payload is written to the holding buffer and conforming data is read from a corresponding cache memory address to merge with the data payload. The RMW pipeline has a merge stage and a syndrome generation stage. The merge stage merges the data payload in the holding buffer with the conforming data to make merged data. The syndrome generation stage generates an ECC syndrome using the merged data. The memory pipeline writes the data payload and ECC syndrome to the cache memory.
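The merge and syndrome stages map naturally onto two small functions. The C sketch below is a toy model, not the patented pipeline: the per-byte parity "syndrome" merely stands in for a real ECC code, and the byte-enable mask marks which bytes of the payload constitute the partial write.

#include <stdio.h>
#include <stdint.h>

/* Toy "ECC": even parity per byte, packed into 8 bits. A stand-in for a real
 * syndrome code, chosen only to keep the sketch short. */
static uint8_t ecc_syndrome(uint64_t d) {
    uint8_t s = 0;
    for (int b = 0; b < 8; b++) {
        uint8_t byte = (uint8_t)(d >> (8 * b)), p = 0;
        while (byte) { p ^= 1u; byte &= (uint8_t)(byte - 1u); }
        s |= (uint8_t)(p << b);
    }
    return s;
}

/* Merge stage: byte-enable bits select payload bytes over the conforming data
 * read from the corresponding cache memory address. */
static uint64_t merge(uint64_t conforming, uint64_t payload, uint8_t be) {
    uint64_t m = 0;
    for (int b = 0; b < 8; b++)
        if (be & (1u << b)) m |= 0xffull << (8 * b);
    return (conforming & ~m) | (payload & m);
}

int main(void) {
    uint64_t conforming = 0x1122334455667788ull;  /* read from the cache address */
    uint64_t payload    = 0x00000000AABB0000ull;  /* partial write, bytes 2-3 */
    uint64_t merged     = merge(conforming, payload, 0x0C);
    printf("merged = %016llx, syndrome = %02x\n",
           (unsigned long long)merged, ecc_syndrome(merged));
    return 0;
}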

Publication date: 10-12-2013

Efficient cache allocation by optimizing size and order of allocate commands based on bytes required by CPU

Number: US0008607000B2

This invention is a data processing system having a multi-level cache system. The multi-level cache system includes at least one first level cache and a second level cache. Upon a cache miss in both the at least one first level cache and the second level cache, the data processing system evicts and allocates a cache line within the second level cache. The data processing system determines from the miss address whether the request falls within the low half or the high half of the allocated cache line. The data processing system first requests from external memory the data of the missed half of the cache line. Upon receipt, the data is supplied to the at least one first level cache and the CPU. The data processing system then requests from external memory the data for the other half of the second level cache line.
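The ordering decision reduces to one address comparison. A minimal C sketch (line size and addresses are illustrative assumptions): the half containing the miss address is requested first so the CPU can be served early, then the other half.

#include <stdio.h>
#include <stdint.h>

#define L2_LINE 128u                 /* assumed bytes per second level cache line */
#define HALF    (L2_LINE / 2u)

/* Issue the external-memory requests for one L2 allocation, miss half first. */
static void request_line(uint32_t miss_addr) {
    uint32_t base   = miss_addr & ~(uint32_t)(L2_LINE - 1u);
    uint32_t off    = miss_addr & (L2_LINE - 1u);
    uint32_t first  = (off < HALF) ? base : base + HALF;   /* contains the miss */
    uint32_t second = (off < HALF) ? base + HALF : base;   /* fetched afterwards */
    printf("miss 0x%08x: fetch 0x%08x first (forward to L1/CPU), then 0x%08x\n",
           miss_addr, first, second);
}

int main(void) {
    request_line(0x00001004u);   /* low half miss  */
    request_line(0x00001044u);   /* high half miss */
    return 0;
}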

Publication date: 26-07-2012

PERFORMANCE AND POWER IMPROVEMENT ON DMA WRITES TO LEVEL TWO COMBINED CACHE/SRAM THAT IS CACHED IN LEVEL ONE DATA CACHE AND LINE IS VALID AND DIRTY

Number: US20120191914A1
Assignee: TEXAS INSTRUMENTS INCORPORATED

This invention optimizes DMA writes to directly addressable level two memory that is cached in level one when the line is valid and dirty. When the level two controller detects that a line is valid and dirty in level one, the level two memory need not update its copy of the data. Level one memory will replace the level two copy with a victim writeback at a future time. Thus the level two memory need not store a copy. This limits the number of DMA writes to level two directly addressable memory and thus improves performance and minimizes dynamic power. This also frees the level two memory for other masters/requestors.

Publication date: 04-10-2022

Pipeline arbitration

Number: US0011461127B2

A method includes receiving, by a first stage in a pipeline, a first transaction from a previous stage in the pipeline; in response to the first transaction comprising a high priority transaction, processing the high priority transaction by sending it to a buffer; receiving a second transaction from the previous stage; in response to the second transaction comprising a low priority transaction, processing the low priority transaction by monitoring a full signal from the buffer while sending the low priority transaction to the buffer; in response to the full signal being asserted and no high priority transaction being available from the previous stage, pausing processing of the low priority transaction; in response to the full signal being asserted and a high priority transaction being available from the previous stage, stopping processing of the low priority transaction and processing the high priority transaction; and in response to the full signal being de-asserted, processing the low priority transaction by sending the low priority transaction to ...

Publication date: 20-05-2014

Enhanced pipelining and multi-buffer architecture for level two cache controller to minimize hazard stalls and optimize performance

Number: US0008732398B2

This invention is a data processing system including a central processing unit, an external interface, a level one cache, level two memory including level two unified cache and directly addressable memory. A level two memory controller includes a directly addressable memory read pipeline, a central processing unit write pipeline, an external cacheable pipeline and an external non-cacheable pipeline.

Publication date: 03-01-2019

Lookahead Priority Collection to Support Priority Elevation

Number: US20190004967A1
Assignee: Texas Instruments Inc

A queuing requester for access to a memory system is provided. Transaction requests are received from two or more requestors for access to the memory system. Each transaction request includes an associated priority value. A request queue of the received transaction requests is formed in the queuing requester. A highest priority value of all pending transaction requests within the request queue is determined. An elevated priority value is selected when the highest priority value is higher than the priority value of an oldest transaction request in the request queue; otherwise the priority value of the oldest transaction request is selected. The oldest transaction request in the request queue with the selected priority value is then provided to the memory system. An arbitration contest with other requesters for access to the memory system is performed using the selected priority value.
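The selection rule is a max-compare between the oldest request's priority and the highest pending priority. A compact C sketch, assuming a hypothetical encoding where larger numbers mean higher priority:

#include <stdio.h>

/* Priority to use for the oldest queued request: elevate it to the highest
 * pending priority if some younger request outranks it. */
static int select_priority(const int *queue_prio, int n, int oldest_idx) {
    int highest = queue_prio[0];
    for (int i = 1; i < n; i++)
        if (queue_prio[i] > highest) highest = queue_prio[i];
    int oldest = queue_prio[oldest_idx];
    return (highest > oldest) ? highest : oldest;   /* elevated or unchanged */
}

int main(void) {
    int prio[] = { 2, 7, 1, 3 };   /* pending requests; index 0 is the oldest */
    printf("arbitrate with priority %d\n", select_priority(prio, 4, 0)); /* 7 */
    return 0;
}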

Publication date: 27-12-2022

Lookahead priority collection to support priority elevation

Number: US0011537532B2

A queuing requester for access to a memory system is provided. Transaction requests are received from two or more requestors for access to the memory system. Each transaction request includes an associated priority value. A request queue of the received transaction requests is formed in the queuing requester. A highest priority value of all pending transaction requests within the request queue is determined. An elevated priority value is selected when the highest priority value is higher than the priority value of an oldest transaction request in the request queue; otherwise the priority value of the oldest transaction request is selected. The oldest transaction request in the request queue with the selected priority value is then provided to the memory system. An arbitration contest with other requesters for access to the memory system is performed using the selected priority value.

Publication date: 29-03-2016

Performance and power improvement on DMA writes to level two combined cache/SRAM that is cached in level one data cache and line is valid and dirty

Number: US0009298643B2

This invention optimizes DMA writes to directly addressable level two memory that is cached in level one when the line is valid and dirty. When the level two controller detects that a line is valid and dirty in level one, the level two memory need not update its copy of the data. Level one memory will replace the level two copy with a victim writeback at a future time. Thus the level two memory need not store a copy. This limits the number of DMA writes to level two directly addressable memory and thus improves performance and minimizes dynamic power. This also frees the level two memory for other masters/requestors.

Publication date: 26-07-2012

EFFICIENT LEVEL TWO MEMORY BANKING TO IMPROVE PERFORMANCE FOR MULTIPLE SOURCE TRAFFIC AND ENABLE DEEPER PIPELINING OF ACCESSES BY REDUCING BANK STALLS

Number: US20120191915A1
Assignee: TEXAS INSTRUMENTS INCORPORATED

The level two memory of this invention supports coherency data transfers with level one cache and DMA data transfers. The width of DMA transfers is 16 bytes. The width of level one instruction cache transfers is 32 bytes. The width of level one data transfers is 64 bytes. The width of level two allocates is 128 bytes. DMA transfers are interspersed with CPU traffic and have similar requirements of efficient throughput and reduced latency. An additional challenge is that these two data streams (CPU and DMA) require access to the level two memory at the same time. This invention is a banking technique for the level two memory to facilitate efficient data transfers.
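The benefit of banking is that narrow DMA beats and wide CPU line fills touch different numbers of banks, so concurrent streams can often proceed without bank conflicts. A C sketch of the bank mapping; the bank width and bank count are assumptions for illustration, not taken from the patent:

#include <stdio.h>
#include <stdint.h>

#define BANK_BYTES 16u   /* assumed bank width = one 16-byte DMA beat */
#define NUM_BANKS   8u

static void show_banks(const char *who, uint32_t addr, uint32_t bytes) {
    printf("%s @0x%04x:", who, addr);
    for (uint32_t a = addr; a < addr + bytes; a += BANK_BYTES)
        printf(" bank %u", (a / BANK_BYTES) % NUM_BANKS);
    printf("\n");
}

int main(void) {
    show_banks("DMA write (16B)   ", 0x0000, 16);   /* touches 1 bank  */
    show_banks("L1I fill (32B)    ", 0x0040, 32);   /* touches 2 banks */
    show_banks("L1D fill (64B)    ", 0x0080, 64);   /* touches 4 banks */
    show_banks("L2 allocate (128B)", 0x0100, 128);  /* touches all 8   */
    return 0;
}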

Publication date: 08-06-2023

PREFETCH MANAGEMENT IN A HIERARCHICAL CACHE SYSTEM

Number: US20230176975A1

An apparatus includes a CPU core, a first memory cache with a first line size, and a second memory cache having a second line size larger than the first line size. Each line of the second memory cache includes an upper half and a lower half. A memory controller subsystem is coupled to the CPU core and to the first and second memory caches. Upon a miss in the first memory cache for a first target address, the memory controller subsystem determines that the first target address resulting in the miss maps to the lower half of a line in the second memory cache, retrieves the entire line from the second memory cache, and returns the entire line from the second memory cache to the first memory cache.
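The lower/upper-half determination is a single comparison on the line offset once the line sizes are fixed. A minimal C sketch assuming 64-byte L1 lines and 128-byte L2 lines (so the second line size is twice the first, as in the related claims):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define L1_LINE  64u
#define L2_LINE 128u   /* each L2 line = lower half + upper half */

/* On an L1 miss, decide how much of the containing L2 line to return. */
static void l1_miss(uint32_t target) {
    bool lower = (target & (L2_LINE - 1u)) < (L2_LINE / 2u);
    if (lower)   /* miss in lower half: return the entire L2 line to L1 */
        printf("0x%08x -> lower half: return whole 128B line\n", target);
    else         /* miss in upper half: return only the upper half */
        printf("0x%08x -> upper half: return upper 64B only\n", target);
}

int main(void) {
    l1_miss(0x00002010u);
    l1_miss(0x00002050u);
    return 0;
}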

Publication date: 26-01-2016

Zero cycle clock invalidate operation

Number: US0009244837B2

A method to eliminate the delay of a block invalidate operation in a multi CPU environment by overlapping the block invalidate operation with normal CPU accesses, thus making the delay transparent. A range check is performed on each CPU access while a block invalidate operation is in progress, and an access that maps to within the address range of the block invalidate operation is treated as a cache miss to ensure that the requesting CPU will receive valid data.
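The range check amounts to comparing each CPU access against the in-progress invalidate window and forcing a miss on overlap. A C sketch with a single window and invented field names:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

static struct { bool active; uint32_t base, size; } blk_inv; /* in-progress op */

/* Returns true if a CPU access must be treated as a cache miss because it
 * falls inside the address range still being invalidated. */
static bool force_miss(uint32_t addr) {
    return blk_inv.active &&
           addr >= blk_inv.base && addr < blk_inv.base + blk_inv.size;
}

int main(void) {
    blk_inv.active = true; blk_inv.base = 0x8000; blk_inv.size = 0x1000;
    printf("0x8800 -> %s\n", force_miss(0x8800) ? "miss (in range)" : "normal lookup");
    printf("0x4000 -> %s\n", force_miss(0x4000) ? "miss (in range)" : "normal lookup");
    return 0;
}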

Publication date: 24-02-2022

PREFETCH MANAGEMENT IN A HIERARCHICAL CACHE SYSTEM

Number: US20220058127A1
Assignee: Texas Instruments Inc

An apparatus includes a CPU core, a first memory cache with a first line size, and a second memory cache having a second line size larger than the first line size. Each line of the second memory cache includes an upper half and a lower half. A memory controller subsystem is coupled to the CPU core and to the first and second memory caches. Upon a miss in the first memory cache for a first target address, the memory controller subsystem determines that the first target address resulting in the miss maps to the lower half of a line in the second memory cache, retrieves the entire line from the second memory cache, and returns the entire line from the second memory cache to the first memory cache.

Publication date: 07-10-2014

Hazard prevention for data conflicts between level one data cache line allocates and snoop writes

Number: US0008856446B2

A comparator compares the address of DMA writes in the final entry of the FIFO stack to all pending read addresses in a monitor memory. If there is no match, then the DMA access is permitted to proceed. If the DMA write is to a cache line with a pending read, the DMA write access is stalled together with any DMA accesses behind the DMA write in the FIFO stack. DMA read accesses are not compared but may stall behind a stalled DMA write access. These stalls occur if the cache read was potentially cacheable. This is possible for some monitored accesses but not all. If a DMA write is stalled, the comparator releases it to complete once there are no pending reads to the same cache line.

Publication date: 02-08-2012

Priority Based Exception Mechanism for Multi-Level Cache Controller

Number: US20120198272A1
Assignee: TEXAS INSTRUMENTS INCORPORATED

This invention is an exception priority arbitration unit which prioritizes memory access permission fault and data exception signals according to a fixed hierarchy if received during a same cycle. A CPU memory access permission fault is prioritized above a direct memory access (DMA) permission fault. Any memory access permission fault is prioritized above a data exception signal. A non-correctable data exception signal is prioritized above a correctable data exception signal.
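The fixed hierarchy can be written as an ordered enumeration, and arbitration becomes picking the first pending signal in that order. A C sketch (signal names are hypothetical):

#include <stdio.h>

/* Fixed hierarchy, highest priority first, per the abstract. */
enum exc { CPU_PERM_FAULT, DMA_PERM_FAULT, NONCORR_DATA_EXC, CORR_DATA_EXC, NUM_EXC };
static const char *names[NUM_EXC] =
    { "CPU permission fault", "DMA permission fault",
      "non-correctable data exception", "correctable data exception" };

/* pending is a bitmask of signals received in the same cycle. */
static enum exc arbitrate(unsigned pending) {
    for (int e = 0; e < NUM_EXC; e++)
        if (pending & (1u << e)) return (enum exc)e;
    return NUM_EXC;
}

int main(void) {
    unsigned pending = (1u << DMA_PERM_FAULT) | (1u << CORR_DATA_EXC);
    printf("raise: %s\n", names[arbitrate(pending)]); /* DMA permission fault */
    return 0;
}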

Publication date: 08-04-2014

Clock control of pipelined memory for improved delay fault testing

Number: US0008694843B2

In an embodiment of the invention, a pipelined memory bank is tested by scanning test patterns into an integrated circuit. Test data is formed from the test patterns and shifted into a scan-in chain in the pipelined memory bank. The test data in the scan-in chain is launched into the inputs of the pipelined memory bank during a first clock cycle. Data from the outputs of the pipelined memory bank is captured in a scan-out chain during a second cycle where the time between the first and second clock cycles is equal to or greater than the read latency of the memory bank.

Publication date: 31-01-2023

Prefetch management in a hierarchical cache system

Number: US0011567874B2

An apparatus includes a CPU core, a first memory cache with a first line size, and a second memory cache having a second line size larger than the first line size. Each line of the second memory cache includes an upper half and a lower half. A memory controller subsystem is coupled to the CPU core and to the first and second memory caches. Upon a miss in the first memory cache for a first target address, the memory controller subsystem determines that the first target address resulting in the miss maps to the lower half of a line in the second memory cache, retrieves the entire line from the second memory cache, and returns the entire line from the second memory cache to the first memory cache.

Publication date: 25-02-2014

Efficient level two memory banking to improve performance for multiple source traffic and enable deeper pipelining of accesses by reducing bank stalls

Number: US0008661199B2

The level two memory of this invention supports coherency data transfers with level one cache and DMA data transfers. The width of DMA transfers is 16 bytes. The width of level one instruction cache transfers is 32 bytes. The width of level one data transfers is 64 bytes. The width of level two allocates is 128 bytes. DMA transfers are interspersed with CPU traffic and have similar requirements of efficient throughput and reduced latency. An additional challenge is that these two data streams (CPU and DMA) require access to the level two memory at the same time. This invention is a banking technique for the level two memory to facilitate efficient data transfers.

Publication date: 19-04-2022

Tag update bus for updated coherence state

Number: US0011307987B2

An apparatus includes a CPU core and a L1 cache subsystem including a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem coupled to the L1 cache subsystem by a transaction bus and a tag update bus. The L2 cache subsystem includes a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller. The L2 controller receives a message from the L1 controller over the tag update bus, including a valid signal, an address, and a coherence state. In response to the valid signal being asserted, the L2 controller identifies an entry in the shadow L1 main cache or the shadow L1 victim cache having an address corresponding to the address of the message and updates a coherence state of the identified entry to be the coherence state of the message.

Publication date: 07-12-2023

ERROR CORRECTING CODES FOR MULTI-MASTER MEMORY CONTROLLER

Number: US20230393933A1
Assignee: Texas Instruments Inc

A system includes a memory controller, multiple memories coupled to the memory controller, and multiple controlling components coupled to the memory controller. The memory controller calculates an error correction code (ECC) syndrome of a first type for a segment of data; stores the segment of data and the ECC syndrome of the first type in a first memory of the multiple memories; receives a request from a controlling component of the multiple controlling components directed to the segment of data, the controlling component implementing an ECC syndrome of a second type; transforms the ECC syndrome of the first type for the segment of data to the ECC syndrome of the second type; detects a number of errors, if any, present in the segment of data; and takes further action depending on how many errors are detected.

Publication date: 14-11-2023

Cache size change

Number: US0011816032B2

A method includes determining, by a level one (L1) controller, to change a size of a L1 main cache; servicing, by the L1 controller, pending read requests and pending write requests from a central processing unit (CPU) core; stalling, by the L1 controller, new read requests and new write requests from the CPU core; writing back and invalidating, by the L1 controller, the L1 main cache. The method also includes receiving, by a level two (L2) controller, an indication that the L1 main cache has been invalidated and, in response, flushing a pipeline of the L2 controller; in response to the pipeline being flushed, stalling, by the L2 controller, requests received from any master; reinitializing, by the L2 controller, a shadow L1 main cache. Reinitializing includes clearing previous contents of the shadow L1 main cache and changing the size of the shadow L1 main cache.

Publication date: 02-08-2012

Hazard Prevention for Data Conflicts Between Level One Data Cache Line Allocates and Snoop Writes

Number: US20120198162A1
Assignee: TEXAS INSTRUMENTS INCORPORATED

A comparator compares the address of DMA writes in the final entry of the FIFO stack to all pending read addresses in a monitor memory. If there is no match, then the DMA access is permitted to proceed. If the DMA write is to a cache line with a pending read, the DMA write access is stalled together with any DMA accesses behind the DMA write in the FIFO stack. DMA read accesses are not compared but may stall behind a stalled DMA write access. These stalls occur if the cache read was potentially cacheable. This is possible for some monitored accesses but not all. If a DMA write is stalled, the comparator releases it to complete once there are no pending reads to the same cache line.

1. A data processing system comprising:
a central processing unit executing program instructions to manipulate data;
a data cache connected to said central processing unit temporarily storing in a plurality of cache lines data for manipulation by said central processing unit;
a direct memory access unit connected to said central processing unit controlling data transfer, said direct memory access unit operating under control of said central processing unit to control data transfers;
a cache read monitor including a plurality of entries, each entry storing an address of a cache read operation in progress, an entry initialized upon a cache read request and extinguished upon supply of data in response to said corresponding cache read request;
a direct memory access first-in-first-out stack including a plurality of entries, each entry storing an address of a corresponding direct memory access request to said data cache and an indication whether said direct memory access request is a read request or a write request, an entry initialized upon receipt of a direct memory access request and extinguished upon completion of said direct memory access request; and
a comparator connected to said cache read monitor and said direct memory access first-in-first-out stack for comparing an address of ...

Publication date: 05-10-2021

Memory pipeline control in a hierarchical memory system

Number: US0011138117B2

In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.

Publication date: 07-02-2013

Clock Control of Pipelined Memory for Improved Delay Fault Testing

Number: US20130036337A1
Assignee: TEXAS INSTRUMENTS INCORPORATED

In an embodiment of the invention, a pipelined memory bank is tested by scanning test patterns into an integrated circuit. Test data is formed from the test patterns and shifted into a scan-in chain in the pipelined memory bank. The test data in the scan-in chain is launched into the inputs of the pipelined memory bank during a first clock cycle. Data from the outputs of the pipelined memory bank is captured in a scan-out chain during a second cycle where the time between the first and second clock cycles is equal to or greater than the read latency of the memory bank.

Publication date: 14-04-2015

Non-blocking, pipelined write allocates with allocate data merging in a multi-level cache system

Number: US0009009408B2

This invention handles write request cache misses. The cache controller stores write data, sends a read request to external memory for a corresponding cache line, merges the write data with data returned from the external memory and stores the merged data in the cache. The cache controller includes buffers with plural entries storing the write address, the write data, the position of the write data within a cache line and a unique identification number. This stored data enables the cache controller to proceed to servicing other access requests while waiting for a response from the external memory.

Publication date: 05-12-2013

SYSTEM AND METHOD OF OPTIMIZED USER COHERENCE FOR A CACHE BLOCK WITH SPARSE DIRTY LINES

Number: US20130326155A1
Assignee: Texas Instruments Incorporated

A system and method of optimized user coherence for a cache block with sparse dirty lines is disclosed wherein valid and dirty bits of each set are logically AND'ed together and the results for multiple sets are logically OR'ed together, resulting in an indication whether a particular block has any dirty lines. If the result indicates that a block does not have dirty lines, then that entire block can be skipped from being written back without affecting coherency.

1. Controller circuitry coupled to a cache having a plurality of blocks, each block having a plurality of sets with valid and dirty status bits associated with a cache line within a block, the controller circuitry comprising:
(a) logical AND circuitry having a plurality of inputs coupled to the valid and dirty bits of each set and a plurality of outputs for providing a logical AND of the valid and dirty bits of each set; and
(b) logical OR circuitry having a plurality of inputs coupled to the plurality of outputs of the logical AND circuitry, for indicating whether a particular block has any dirty lines.
2. The controller circuitry of claim 1, wherein if the logical OR circuitry indicates that a particular block does not have dirty lines, then that particular block can skip writeback to main memory without affecting coherency; otherwise, detect circuitry checks each output of the logical AND circuitry for a particular block to identify a dirty cache line to be written back to main memory to maintain coherency.
3. The controller circuitry of claim 1, wherein the cache is an instruction cache.
4. The controller circuitry of claim 1, wherein the cache is a data cache.
5. The controller circuitry of claim 1, wherein the cache is a level one cache.
6. The controller circuitry of claim 1, wherein the cache is a level two cache.
7. A method of optimized user coherence for a cache block in a cache having a plurality of sets for holding cache lines, comprising steps of:
(a) logically AND'ing valid and dirty status bits of each set ...
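In software, the claimed AND/OR reduction looks like the following C sketch, a behavioral model of the circuitry rather than RTL; block and set sizes are arbitrary:

#include <stdio.h>
#include <stdbool.h>

#define SETS_PER_BLOCK 8

struct set { bool valid, dirty; };

/* OR over the per-set (valid AND dirty) terms: true if any line needs writeback. */
static bool block_has_dirty(const struct set *s) {
    bool any = false;
    for (int i = 0; i < SETS_PER_BLOCK; i++)
        any = any || (s[i].valid && s[i].dirty);  /* AND stage feeding OR stage */
    return any;
}

int main(void) {
    struct set clean[SETS_PER_BLOCK]  = { { true, false } };
    struct set sparse[SETS_PER_BLOCK] = { { true, false }, { true, true } };
    printf("clean block:  %s\n",
           block_has_dirty(clean)  ? "scan for dirty lines" : "skip writeback");
    printf("sparse block: %s\n",
           block_has_dirty(sparse) ? "scan for dirty lines" : "skip writeback");
    return 0;
}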

Publication date: 24-03-2020

Cache management operations using streaming engine

Number: US0010599433B2

A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache management operation is performed on a cache in the memory using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.

Publication date: 26-11-2020

MEMORY PIPELINE CONTROL IN A HIERARCHICAL MEMORY SYSTEM

Number: US20200371937A1
Assignee: Texas Instruments Inc

In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.

Publication date: 19-03-2020

PREFETCH KILL AND REVIVAL IN AN INSTRUCTION CACHE

Number: US20200089622A1

A system comprises a processor including a CPU core, first and second memory caches, and a memory controller subsystem. The memory controller subsystem speculatively determines a hit or miss condition of a virtual address in the first memory cache and speculatively translates the virtual address to a physical address. Associated with the hit or miss condition and the physical address, the memory controller subsystem configures a status to a valid state. Responsive to receipt of a first indication from the CPU core that no program instructions associated with the virtual address are needed, the memory controller subsystem reconfigures the status to an invalid state and, responsive to receipt of a second indication from the CPU core that a program instruction associated with the virtual address is needed, the memory controller subsystem reconfigures the status back to a valid state.

1. A data processing apparatus comprising:
a memory; and
a memory controller coupled to the memory and configured to:
receive a first address and a pre-fetch count value;
compute a second address based on the first address;
determine a hit/miss condition of the second address;
set a status of the second address to a valid state;
after setting the status of the second address to a valid state, determine whether the pre-fetch count value is zero; and
change the status of the second address to an invalid state when the pre-fetch count value is zero.
2. The data processing apparatus of claim 1, wherein the memory controller includes a register and setting the status of the second address to valid includes storing a first value that corresponds to a valid state to a first field of the register.
3. The data processing apparatus of claim 2, wherein the first field is a single bit of the register.
4. The data processing apparatus of claim 2, wherein the memory controller is configured to store the hit/miss condition of the second address into a second field of the register and store ...

Publication date: 20-02-2020

PREFETCH MANAGEMENT IN A HIERARCHICAL CACHE SYSTEM

Number: US20200057720A1

An apparatus includes a CPU core, a first memory cache with a first line size, and a second memory cache having a second line size larger than the first line size. Each line of the second memory cache includes an upper half and a lower half. A memory controller subsystem is coupled to the CPU core and to the first and second memory caches. Upon a miss in the first memory cache for a first target address, the memory controller subsystem determines that the first target address resulting in the miss maps to the lower half of a line in the second memory cache, retrieves the entire line from the second memory cache, and returns the entire line from the second memory cache to the first memory cache.

1. (canceled)
2. The apparatus of claim 3, wherein the second line size is twice the first line size.
3. An apparatus comprising:
a central processing unit (CPU) core;
a first memory cache to store instructions for execution by the CPU core, the first memory cache having a first line size;
a second memory cache to store instructions for execution by the CPU core, the second memory cache having a second line size, the second line size being larger than the first line size, each line of the second memory cache comprising an upper half and a lower half; and
a memory controller subsystem configured to:
upon a determination of a first miss in the first memory cache for a first target address:
determine that the first target address maps to the lower half of a first line in the second memory cache;
retrieve the entire first line from the second memory cache; and
return the entire first line from the second memory cache to the first memory cache; and
upon a determination of a second miss in the first memory cache for a second target address:
determine that the second target address maps to the upper half of a second line in the second memory cache; and
return to the first memory cache the upper half of the second line from the second memory cache and not the lower half of the second line from the second memory cache.

Publication date: 19-05-2022

PIPELINED READ-MODIFY-WRITE OPERATIONS IN CACHE MEMORY

Number: US20220156149A1
Assignee: Texas Instruments Inc

In described examples, a processor system includes a processor core that generates memory write requests, a cache memory, and a memory pipeline of the cache memory. The memory pipeline has a holding buffer, an anchor stage, and an RMW pipeline. The anchor stage determines whether a data payload of a write request corresponds to a partial write. If so, the data payload is written to the holding buffer and conforming data is read from a corresponding cache memory address to merge with the data payload. The RMW pipeline has a merge stage and a syndrome generation stage. The merge stage merges the data payload in the holding buffer with the conforming data to make merged data. The syndrome generation stage generates an ECC syndrome using the merged data. The memory pipeline writes the data payload and ECC syndrome to the cache memory.

Publication date: 14-09-2021

Cache management operations using streaming engine

Number: US0011119776B2

A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache management operation is performed on a cache in the memory using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.

Publication date: 29-03-2012

Requester Based Transaction Status Reporting in a System with Multi-Level Memory

Number: US20120079102A1

A system has memory resources accessible by a central processing unit (CPU). One or more transaction requests are initiated by the CPU for access to one or more of the memory resources. Initiation of transaction requests is ceased for a period of time. The memory resources are monitored to determine when all of the transaction requests initiated by the CPU have been completed. An idle signal accessible by the CPU is provided that is asserted when all of the transaction requests initiated by the CPU have been completed.

1. A method of operating a system having memory resources accessible by a central processing unit (CPU), the method comprising:
initiating one or more transaction requests by the CPU for access to one or more of the memory resources;
ceasing initiation of transaction requests;
monitoring the memory resources to determine when all of the transaction requests initiated by the CPU have been completed; and
providing an idle signal accessible by the CPU that is asserted when all of the transaction requests initiated by the CPU have been completed.
2. The method of claim 1, wherein ceasing initiation of transaction requests comprises stalling execution of instructions by the CPU until the idle signal is asserted and then resuming execution of instructions by the CPU.
3. The method of claim 1, wherein ceasing initiation of transaction requests comprises executing a software loop from a loop buffer within the CPU while monitoring the plurality of memory resources until the idle signal is asserted.
4. The method of claim 1, wherein assertion of the idle signal asserts an idle bit in a memory mapped register that is accessible by a memory access instruction executed by the CPU.
5. The method of claim 4, wherein ceasing initiation of transaction requests comprises executing a software loop from a loop buffer within the CPU while monitoring the plurality of memory resources until the idle signal is asserted, wherein the software loop polls the idle bit until ...
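The memory-mapped idle bit of claim 4 is the kind of flag software polls with a volatile read. A hedged C sketch: the register is modeled here as a plain variable so the example runs anywhere, where a real system would use a pointer to the memory-mapped status register.

#include <stdint.h>
#include <stdio.h>

#define IDLE_BIT 0x1u

/* In a real system this would be a volatile pointer to the memory-mapped
 * status register; a plain variable stands in for this sketch. */
static volatile uint32_t status_reg = 0;

static void wait_for_idle(void) {
    while (!(status_reg & IDLE_BIT))  /* software loop polling the idle bit */
        ;                             /* CPU-initiated transactions pending */
}

int main(void) {
    status_reg = IDLE_BIT;            /* pretend the monitor asserted idle */
    wait_for_idle();
    puts("all CPU-initiated transactions have completed");
    return 0;
}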

Publication date: 12-10-2023

MULTI-LEVEL CACHE SECURITY

Number: US20230325314A1
Assignee: Texas Instruments Inc

In described examples, a coherent memory system includes a central processing unit (CPU), and first and second level caches, each with a cache controller. The CPU is arranged to execute program instructions to manipulate data in at least a first or second secure context. Each of the first and second caches stores a secure code for indicating the secure context by which data for a respective cache line is received. The first and second level caches maintain coherency in response to comparing the secure codes of respective lines of cache and executing a cache coherency operation in response. A requestor coupled to the second level cache may send a coherence read transaction to the second level cache controller, which upon an affirmative security check, generates a snoop read transaction and sends the same to the first level cache.

Publication date: 23-02-2016

Level one data cache line lock and enhanced snoop protocol during cache victims and writebacks to maintain level one data cache and level two cache coherence

Number: US0009268708B2
Assignee: TEXAS INSTRUMENTS INCORPORATED

This invention assures cache coherence in a multi-level cache system upon eviction of a higher level cache line. A victim buffer stores data from evicted lines. On a DMA access that may be cached in the higher level cache, the lower level cache sends a snoop write. The address of this snoop write is compared with the victim buffer. On a hit in the victim buffer, the write completes in the victim buffer. When the victim data passes to the next cache level, it is written into a second victim buffer to be retired when the data is committed to cache. DMA write addresses are compared to addresses in this second victim buffer. On a match, the write takes place in the second victim buffer. On a failure to match, the controller sends a snoop write.

Publication date: 21-03-2023

Pipelined read-modify-write operations in cache memory

Number: US0011609818B2
Assignee: Texas Instruments Incorporated

In described examples, a processor system includes a processor core that generates memory write requests, a cache memory, and a memory pipeline of the cache memory. The memory pipeline has a holding buffer, an anchor stage, and an RMW pipeline. The anchor stage determines whether a data payload of a write request corresponds to a partial write. If so, the data payload is written to the holding buffer and conforming data is read from a corresponding cache memory address to merge with the data payload. The RMW pipeline has a merge stage and a syndrome generation stage. The merge stage merges the data payload in the holding buffer with the conforming data to make merged data. The syndrome generation stage generates an ECC syndrome using the merged data. The memory pipeline writes the data payload and ECC syndrome to the cache memory.

Publication date: 04-08-2022

Cache Preload Operations Using Streaming Engine

Number: US20220244957A1

A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache preload operation is performed on a cache in the memory using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.

Publication date: 05-03-2024

Write streaming with cache write acknowledgment in a processor

Number: US0011921637B2
Assignee: Texas Instruments Incorporated

In described examples, a processor system includes a processor core that generates memory write requests, a cache memory, and a memory controller. The memory controller has a memory pipeline. The memory controller is coupled to control the cache memory and communicatively coupled to the processor core. The memory controller is configured to receive the memory write requests from the processor core; schedule the memory write requests on the memory pipeline; and contemporaneously with scheduling respective ones of the memory write requests on the memory pipeline, send to the processor core a write acknowledgment confirming that writing of a data payload of the respective memory write request to the cache memory has completed.

Publication date: 02-08-2012

Efficient Cache Allocation by Optimizing Size and Order of Allocate Commands Based on Bytes Required by CPU

Number: US20120198160A1
Assignee: TEXAS INSTRUMENTS INCORPORATED

This invention is a data processing system having a multi-level cache system. The multi-level cache system includes at least one first level cache and a second level cache. Upon a cache miss in both the at least one first level cache and the second level cache, the data processing system evicts and allocates a cache line within the second level cache. The data processing system determines from the miss address whether the request falls within the low half or the high half of the allocated cache line. The data processing system first requests from external memory the data of the missed half of the cache line. Upon receipt, the data is supplied to the at least one first level cache and the CPU. The data processing system then requests from external memory the data for the other half of the second level cache line.

1. A data processing system comprising:
a central processing unit executing program instructions to manipulate data;
at least one first level cache connected to said central processing unit temporarily storing in a plurality of cache lines at least one of program instructions for execution by said central processing unit and data for manipulation by said central processing unit, said at least one first level cache having a cache line size of less than or equal to N bits; and
a second level cache connected to said first level cache temporarily storing in a plurality of cache lines at least one of program instructions for execution by said central processing unit and data for manipulation by said central processing unit, said second level cache having a cache line size of 2N bits; and [...]
evict and allocate a cache line within said second level cache to store said corresponding address,
determine from said corresponding address whether said request of said central processing unit falls within a low half or a high half of said allocated cache line,
if said request of said central processing unit falls within said low half of said allocated cache line (1) requesting first N bits from an ...

Publication date: 10-08-2023

PREFETCH KILL AND REVIVAL IN AN INSTRUCTION CACHE

Number: US20230251975A1
Assignee: Texas Instruments Inc

A system comprises a processor including a CPU core, first and second memory caches, and a memory controller subsystem. The memory controller subsystem speculatively determines a hit or miss condition of a virtual address in the first memory cache and speculatively translates the virtual address to a physical address. Associated with the hit or miss condition and the physical address, the memory controller subsystem configures a status to a valid state. Responsive to receipt of a first indication from the CPU core that no program instructions associated with the virtual address are needed, the memory controller subsystem reconfigures the status to an invalid state and, responsive to receipt of a second indication from the CPU core that a program instruction associated with the virtual address is needed, the memory controller subsystem reconfigures the status back to a valid state.

Publication date: 23-05-2024

WRITE STREAMING WITH CACHE WRITE ACKNOWLEDGMENT IN A PROCESSOR

Number: US20240168883A1
Assignee: Texas Instruments Inc

In described examples, a processor system includes a processor core that generates memory write requests, a cache memory, and a memory controller. The memory controller has a memory pipeline. The memory controller is coupled to control the cache memory and communicatively coupled to the processor core. The memory controller is configured to receive the memory write requests from the processor core; schedule the memory write requests on the memory pipeline; and contemporaneously with scheduling respective ones of the memory write requests on the memory pipeline, send to the processor core a write acknowledgment confirming that writing of a data payload of the respective memory write request to the cache memory has completed.

Publication date: 24-09-2015

PERFORMANCE AND POWER IMPROVEMENT ON DMA WRITES TO LEVEL TWO COMBINED CACHE/SRAM THAT IS CACHED IN LEVEL ONE DATA CACHE AND LINE IS VALID AND DIRTY

Number: US20150269090A1

This invention optimizes DMA writes to directly addressable level two memory that is cached in level one when the line is valid and dirty. When the level two controller detects that a line is valid and dirty in level one, the level two memory need not update its copy of the data. Level one memory will replace the level two copy with a victim writeback at a future time. Thus the level two memory need not store a copy. This limits the number of DMA writes to level two directly addressable memory and thus improves performance and minimizes dynamic power. This also frees the level two memory for other masters/requestors.

1-4. (canceled)
5. A method of data processing comprising the steps of:
temporarily storing in a plurality of first level cache lines data for manipulation by a central processing unit;
storing for each first level cache line a tag indicating a valid and a dirty status of corresponding data;
temporarily storing in a plurality of second level cache lines data for manipulation by the central processing unit;
storing for the second level cache a set of shadow tags corresponding to the tags of the first level cache;
storing data in a second level memory directly addressable by the central processing unit;
transferring data including transferring data into the second level directly addressable memory;
determining from the shadow tags if the address of a data transfer into the second level directly addressable memory is cached in the first level cache;
if said address of said data transfer into the second level directly addressable memory is cached in the first level cache, determining from said shadow tags if said data is valid and dirty in the first level cache; and
if said address of said data transfer into the second level directly addressable memory is cached in the first level cache as valid and dirty, then transferring said data into a corresponding cache line in the first level cache and not into the second level directly addressable memory.
6. The ...

Publication date: 25-03-2014

Cache pre-allocation of ways for pipelined allocate requests

Number: US0008683137B2

This invention is a data processing system with a data cache. The cache controller responds to a cache miss requiring allocation by pre-allocating a way in the set to an allocation request according to a least recently used indication of the ways, and then updates the least recently used indication of the remaining ways of the set. This permits read allocate requests to the same set to proceed without introducing processing stalls due to way contention. This also allows multiple outstanding allocate requests to the same set and way combination. The cache also compares the address of a newly received allocation request to stall this allocation request if the address matches an address of any pending allocation request.

Publication date: 09-11-2021

Prefetch management in a hierarchical cache system

Number: US0011169924B2

An apparatus includes a CPU core, a first memory cache with a first line size, and a second memory cache having a second line size larger than the first line size. Each line of the second memory cache includes an upper half and a lower half. A memory controller subsystem is coupled to the CPU core and to the first and second memory caches. Upon a miss in the first memory cache for a first target address, the memory controller subsystem determines that the first target address resulting in the miss maps to the lower half of a line in the second memory cache, retrieves the entire line from the second memory cache, and returns the entire line from the second memory cache to the first memory cache.

Publication date: 01-08-2023

Shadow caches for level 2 cache controller

Number: US0011714754B2

An apparatus including a CPU core and a L1 cache subsystem coupled to the CPU core. The L1 cache subsystem includes a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem coupled to the L1 cache subsystem. The L2 cache subsystem includes a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller. The L2 controller receives an indication from the L1 controller that a cache line A is being relocated from the L1 main cache to the L1 victim cache; in response to the indication, updates the shadow L1 main cache to reflect that the cache line A is no longer located in the L1 main cache; and in response to the indication, updates the shadow L1 victim cache to reflect that the cache line A is located in the L1 victim cache.

Publication date: 19-09-2023

Merging data for write allocate

Number: US0011762683B2

A method includes receiving, by a level two (L2) controller, a write request for an address that is not allocated as a cache line in a L2 cache. The write request specifies write data. The method also includes generating, by the L2 controller, a read request for the address; reserving, by the L2 controller, an entry in a register file for read data returned in response to the read request; updating, by the L2 controller, a data field of the entry with the write data; updating, by the L2 controller, an enable field of the entry associated with the write data; and receiving, by the L2 controller, the read data and merging the read data into the data field of the entry.
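The entry's enable field acts as a byte-granular mask: bytes covered by the write data are pinned, and the returned read data fills only the remaining bytes. A C sketch of the merge; the entry layout is a guess for illustration:

#include <stdio.h>
#include <stdint.h>

struct rf_entry {            /* one reserved register-file entry */
    uint64_t data;           /* data field, holds the write data */
    uint8_t  enable;         /* one bit per byte already written */
};

/* Merge the returned read data into bytes not covered by the write data. */
static void merge_read_data(struct rf_entry *e, uint64_t read_data) {
    uint64_t keep = 0;
    for (int b = 0; b < 8; b++)
        if (e->enable & (1u << b)) keep |= 0xffull << (8 * b);
    e->data = (e->data & keep) | (read_data & ~keep);
}

int main(void) {
    struct rf_entry e = { 0x000000000000BEEFull, 0x03 }; /* write hit bytes 0-1 */
    merge_read_data(&e, 0x1122334455667788ull);          /* read return arrives */
    printf("merged line: %016llx\n", (unsigned long long)e.data);
    /* -> 112233445566BEEF: read data everywhere except the written bytes */
    return 0;
}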

Publication date: 19-07-2022

Aliased mode for cache controller

Number: US0011392498B2
Assignee: TEXAS INSTRUMENTS INCORPORATED

An apparatus includes first CPU and second CPU cores, a L1 cache subsystem coupled to the first CPU core and comprising a L1 controller, and a L2 cache subsystem coupled to the L1 cache subsystem and to the second CPU core. The L2 cache subsystem includes a L2 memory and a L2 controller configured to operate in an aliased mode in response to a value in a memory map control register being asserted. In the aliased mode, the L2 controller receives a first request from the first CPU core directed to a virtual address in the L2 memory, receives a second request from the second CPU core directed to the virtual address in the L2 memory, directs the first request to a physical address A in the L2 memory, and directs the second request to a physical address B in the L2 memory.
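Aliased mode means one virtual address resolves to distinct per-core physical copies. The following toy C model illustrates the idea only; the stride and mapping are invented, and the real translation is internal to the L2 controller:

#include <stdio.h>
#include <stdint.h>

#define ALIAS_STRIDE 0x10000u   /* assumed: per-CPU physical region stride */

/* In aliased mode, the same virtual address from different CPU cores is
 * steered to different physical addresses (A for core 0, B for core 1). */
static uint32_t translate(uint32_t vaddr, unsigned cpu_id, int aliased_mode) {
    if (!aliased_mode) return vaddr;
    return vaddr + cpu_id * ALIAS_STRIDE;
}

int main(void) {
    uint32_t v = 0x00001234u;
    printf("CPU0 -> physical A: 0x%08x\n", translate(v, 0, 1));
    printf("CPU1 -> physical B: 0x%08x\n", translate(v, 1, 1));
    return 0;
}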

Publication date: 19-01-2023

WRITE CONTROL FOR READ-MODIFY-WRITE OPERATIONS IN CACHE MEMORY

Number: US20230013270A1
Assignee: Texas Instruments Inc

In described examples, a processor system includes a processor core that generates memory write requests, and a cache memory with a memory controller having a memory pipeline. The cache memory has cache lines of length L. The cache memory has a minimum write length that is less than a cache line length of the cache memory. The memory pipeline determines whether the data payload includes a first chunk and ECC syndrome that correspond to a partial write and are writable by a first cache write operation, and a second chunk and ECC syndrome that correspond to a full write operation that can be performed separately from the first cache write operation. The memory pipeline performs an RMW operation to store the first chunk and ECC syndrome in the cache memory, and performs the full write operation to store the second chunk and ECC syndrome in the cache memory.

Publication date: 04-06-2024

Multiple-requestor memory access pipeline and arbiter

Number: US0012001351B2

In described examples, a coherent memory system includes a central processing unit (CPU) and first and second level caches. The memory system can include a pipeline for accessing data stored in one of the caches. Requestors can access the data stored in one of the caches by sending requests at a same time that can be arbitrated by the pipeline.

Publication date: 05-05-2020

Prefetch management in a hierarchical cache system

Number: US0010642742B2

An apparatus includes a CPU core, a first memory cache with a first line size, and a second memory cache having a second line size larger than the first line size. Each line of the second memory cache includes an upper half and a lower half. A memory controller subsystem is coupled to the CPU core and to the first and second memory caches. Upon a miss in the first memory cache for a first target address, the memory controller subsystem determines that the first target address resulting in the miss maps to the lower half of a line in the second memory cache, retrieves the entire line from the second memory cache, and returns the entire line from the second memory cache to the first memory cache.

Publication date: 02-12-2014

Cache with multiple access pipelines

Number: US0008904115B2

Parallel pipelines are used to access a shared memory. The shared memory is accessed via a first pipeline by a processor to access cached data from the shared memory. The shared memory is accessed via a second pipeline by a memory access unit to access the shared memory. A first set of tags is maintained for use by the first pipeline to control access to the cache memory, while a second set of tags is maintained for use by the second pipeline to access the shared memory. Arbitrating for access to the cache memory for a transaction request in the first pipeline and for a transaction request in the second pipeline is performed after each pipeline has checked its respective set of tags.

Подробнее
25-06-2024 дата публикации

Handling non-correctable errors

Номер: US0012019514B2

An apparatus includes a central processing unit (CPU) core and a cache subsystem coupled to the CPU core. The cache subsystem includes a first memory, a second memory, and a controller coupled to the first and second memories. The controller is configured to receive a transaction from a master, the transaction directed to the first memory and comprising an address; re-calculate an error correcting code (ECC) for a line of data in the second memory associated with the address; determine that a non-correctable error is present in the line of data in the second memory based on a comparison of the re-calculated ECC and a stored ECC for the line of data; and in response to the determination that a non-correctable error is present in the line of data in the second memory, terminate the transaction without accessing the first memory.
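
A toy version of the recompute-and-compare step; the XOR "code" is only a stand-in for a real ECC, and all names are hypothetical:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Stand-in for the real ECC generator; a true SECDED code would
     * distinguish correctable from non-correctable errors. */
    static uint8_t ecc_recalc(const uint8_t *line, size_t n)
    {
        uint8_t s = 0;
        for (size_t i = 0; i < n; i++)
            s ^= line[i];
        return s;
    }

    /* The transaction is terminated when the recomputed and stored codes
     * disagree (modeled here as any mismatch being non-correctable). */
    static bool noncorrectable(const uint8_t *line, size_t n, uint8_t stored)
    {
        return ecc_recalc(line, n) != stored;
    }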

Подробнее
08-02-2022 дата публикации

Cache coherence shared state suppression

Номер: US0011243883B2
Принадлежит: Texas Instruments Incorporated

A method includes receiving, by a level two (L2) controller, a first request for a cache line in a shared cache coherence state; mapping, by the L2 controller, the first request to a second request for a cache line in an exclusive cache coherence state; and responding, by the L2 controller, to the second request.
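
The mapping itself is tiny; a hypothetical encoding in C (the state names are illustrative, not the patent's):

    /* Hypothetical request encoding. Suppressing the shared state means
     * every shared-state request is upgraded to an exclusive one. */
    typedef enum { REQ_SHARED, REQ_EXCLUSIVE } coh_req_t;

    static coh_req_t map_request(coh_req_t req)
    {
        return (req == REQ_SHARED) ? REQ_EXCLUSIVE : req;
    }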

Подробнее
26-11-2020 дата публикации

WRITE STREAMING IN A PROCESSOR

Номер: US20200371917A1
Принадлежит:

In described examples, a processor system includes a processor core that generates memory write requests, a cache memory, and a memory controller. The memory controller has a memory pipeline. The memory controller is coupled to control the cache memory and communicatively coupled to the processor core. The memory controller is configured to receive the memory write requests from the processor core; schedule the memory write requests on the memory pipeline; and contemporaneously with scheduling respective ones of the memory write requests on the memory pipeline, send to the processor core a write acknowledgment confirming that writing of a data payload of the respective memory write request to the cache memory has completed.

1. A processor system comprising: a processor core configured to generate memory write requests; a cache memory; and a memory controller having a memory pipeline, the memory controller coupled to control the cache memory and communicatively coupled to the processor core, the memory controller configured to: receive the memory write requests from the processor core; schedule the memory write requests on the memory pipeline; and contemporaneously with scheduling respective ones of the memory write requests on the memory pipeline, send to the processor core a write acknowledgment confirming that writing of a data payload of the respective memory write request to the cache memory has completed.

2. The processor system of claim 1, wherein the memory controller is configured to condition performing the scheduling of the memory write request action and the sending of the write acknowledgment action on satisfying a condition comprising at least: the respective memory write request will not break ordering and coherence and will complete within a finite amount of time if scheduled on the memory pipeline.

3. The processor system of claim 2, wherein the condition is satisfied if the memory pipeline is inherently in-order, so that once a memory ...
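
A compilable sketch of the early-acknowledgment gate suggested by claims 2-3; both predicate inputs and all names are assumptions made for illustration:

    #include <stdbool.h>

    /* The write may be acknowledged at scheduling time only if it cannot
     * break ordering or coherence and is guaranteed to finish once it is
     * in the pipeline (e.g., the pipeline is inherently in-order). */
    static bool can_ack_at_schedule(bool pipeline_in_order,
                                    bool may_break_order_or_coherence)
    {
        return pipeline_in_order && !may_break_order_or_coherence;
    }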

Подробнее
16-07-2013 дата публикации

Process variability tolerant programmable memory controller for a pipelined memory system

Номер: US0008488405B2

In an embodiment of the invention, an integrated circuit includes a pipelined memory array and a memory control circuit. The pipelined memory array contains a plurality of memory banks. Based partially on the read access time information of a memory bank, the memory control circuit is configured to select the number of clock cycles used during read latency.
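
The cycle selection reduces to a ceiling division; a sketch with hypothetical units (picoseconds):

    #include <stdint.h>

    /* Round a bank's read access time up to whole clock cycles; e.g., a
     * 2100 ps bank on a 1000 ps clock needs 3 cycles of read latency. */
    static uint32_t read_latency_cycles(uint32_t access_ps, uint32_t clk_ps)
    {
        return (access_ps + clk_ps - 1u) / clk_ps;
    }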

Подробнее
07-07-2015 дата публикации

Performance and power improvement on DMA writes to level two combined cache/SRAM that is caused in level one data cache and line is valid and dirty

Номер: US0009075744B2

This invention optimizes DMA writes to directly addressable level two memory when the line is cached in level one and is valid and dirty. When the level two controller detects that a line is valid and dirty in level one, the level two memory need not update its copy of the data. Level one memory will replace the level two copy with a victim writeback at a future time. Thus the level two memory need not store a copy. This limits the number of DMA writes to level two directly addressable memory and thus improves performance and minimizes dynamic power. This also frees the level two memory for other masters/requestors.
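
A minimal sketch of the skip decision, assuming the L2 controller keeps valid/dirty shadow state for L1 lines (struct and names hypothetical):

    #include <stdbool.h>

    /* Shadow copy of an L1 line's state as seen by the L2 controller. */
    struct l1_shadow { bool valid; bool dirty; };

    /* If L1 holds the line valid and dirty, the later victim writeback
     * will overwrite L2 anyway, so the DMA write need not update L2. */
    static bool l2_update_needed(const struct l1_shadow *s)
    {
        return !(s->valid && s->dirty);
    }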

Подробнее
02-08-2012 дата публикации

CONFIGURABLE SOURCE BASED/REQUESTOR BASED ERROR DETECTION AND CORRECTION FOR SOFT ERRORS IN MULTI-LEVEL CACHE MEMORY TO MINIMIZE CPU INTERRUPT SERVICE ROUTINES

Номер: US20120198310A1
Принадлежит: TEXAS INSTRUMENTS INCORPORATED

This invention is a memory system with parity generation which selectively forms and stores parity bits of corresponding plural data sources. The parity generation and storage depends upon the state of a global suspend bit and a global enable bit, and parity detection/correction corresponding to each data source.

1. A memory system comprising: a plurality of data sources; a memory for storing data; a parity generator receiving data from said plurality of data sources operable to selectively form a set of parity bits corresponding to data from said plurality of data sources and store said data and corresponding parity bits in said memory.

2. The memory system of claim 1, further comprising: a parity enable data register connected to said parity generator, said parity enable data register having a bit corresponding to each of said plurality of data sources; and wherein said parity generator is operable to form parity bits and store said parity bits and said corresponding data in said memory if said bit of said parity bit enable register corresponding to said data source has a first digital state, and store only said data in said memory if said bit of said parity bit enable register corresponding to said data source has a second digital state opposite to said first digital state.

3. The memory system of claim 1, further comprising: an error detection status register having a suspend bit; a parity enable data register connected to said parity generator, said parity enable data register having a bit corresponding to each of said plurality of data sources; and wherein said parity generator is operable to store only said data in said memory if said suspend bit has a first digital state, form parity bits and store said parity bits and said corresponding data in said memory if said bit of said parity bit enable register corresponding to said data source has a third digital state and said suspend bit has a second digital state opposite said first digital state, and store only said data ...
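
A small model of the enable/suspend decision in claims 2-3 (the register layout is hypothetical):

    #include <stdint.h>
    #include <stdbool.h>

    /* One enable bit per data source plus a global suspend bit; parity
     * is generated and stored only when not suspended and enabled. */
    static bool generate_parity(uint32_t enable_bits, unsigned source_id,
                                bool suspend)
    {
        if (suspend)
            return false;                  /* store data only */
        return (enable_bits >> source_id) & 1u;
    }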

Подробнее
01-11-2022 дата публикации

Write control for read-modify-write operations in cache memory

Номер: US0011487616B2

In described examples, a processor system includes a processor core that generates memory write requests, and a cache memory with a memory controller having a memory pipeline. The cache memory has cache lines of length L. The cache memory has a minimum write length that is less than a cache line length of the cache memory. The memory pipeline determines whether the data payload includes a first chunk and ECC syndrome that correspond to a partial write and are writable by a first cache write operation, and a second chunk and ECC syndrome that correspond to a full write operation that can be performed separately from the first cache write operation. The memory pipeline performs an RMW operation to store the first chunk and ECC syndrome in the cache memory, and performs the full write operation to store the second chunk and ECC syndrome in the cache memory.

Подробнее
28-01-2016 дата публикации

ZERO CYCLE CLOCK INVALIDATE OPERATION

Номер: US20160026569A1
Принадлежит:

A method to eliminate the delay of a block invalidate operation in a multi-CPU environment by overlapping the block invalidate operation with normal CPU accesses, thus making the delay transparent. A range check is performed on each CPU access while a block invalidate operation is in progress, and an access that maps to within the address range of the block invalidate operation will be treated as a cache miss to ensure that the requesting CPU will receive valid data.

1.-2. (canceled)

3. A data processing apparatus comprising: a central processing unit operable to perform data processing operations in response to instructions; a cache connected to said central processing unit temporarily storing in a plurality of cache lines at least one of program instructions for execution by said central processing unit and data for manipulation by said central processing unit corresponding to contents of corresponding addresses in an external memory, each cache line including a tag indicating an external memory address of contents stored therein and at least one valid bit indicating whether contents of said cache line are valid or invalid, said cache operable in response to a cache block invalidation command indicating a range of addresses to be invalidated to iterate said cache block invalidation command over address tags of all cache lines of said cache including: comparing an address tag of each cache line to said range of addresses to be invalidated, and if said address tag is within said range of addresses to be invalidated, setting a valid bit for data corresponding to said address tag to indicate an invalid state; and said cache is further operable in response to a cache access during said iteration of a cache block invalidation command to compare an address accessed by said cache access to said range of addresses to be invalidated, and if said address accessed by said cache access is within said range of addresses to be invalidated, generate a cache miss in response to said cache access.
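
A sketch of the range check performed on each CPU access during the invalidate (names hypothetical):

    #include <stdint.h>
    #include <stdbool.h>

    /* While a block invalidate is iterating, any access inside its range
     * is treated as a miss so the CPU refetches valid data. */
    static bool treat_as_miss(uint64_t addr, uint64_t inv_base,
                              uint64_t inv_size, bool invalidate_active)
    {
        return invalidate_active &&
               addr >= inv_base && addr < inv_base + inv_size;
    }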

Подробнее
12-10-2021 дата публикации

Hardware coherence signaling protocol

Номер: US0011144456B2

An apparatus includes a CPU core and a L1 cache subsystem including a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem including a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller configured to receive a read request from the L1 controller as a single transaction. The read request includes a read address, a first indication of an address and a coherence state of a cache line A to be moved from the L1 main cache to the L1 victim cache to allocate space for data returned in response to the read request, and a second indication of an address and a coherence state of a cache line B to be removed from the L1 victim cache in response to the cache line A being moved to the L1 victim cache.
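
One way to picture the single combined transaction is as a message carrying all three pieces of state; a hypothetical layout, not the patent's wire format:

    #include <stdint.h>

    /* Hypothetical message format: the read address plus the main-cache
     * to victim-cache move (line A) and the victim eviction (line B). */
    struct l1_read_msg {
        uint64_t read_addr;
        uint64_t line_a_addr;   uint8_t line_a_coh_state;
        uint64_t line_b_addr;   uint8_t line_b_coh_state;
    };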

Подробнее
02-12-2014 дата публикации

Robust hamming code implementation for soft error detection, correction, and reporting in a multi-level cache system using dual banking memory scheme

Номер: US0008904260B2

The invention is a memory system having two memory banks which can store and recall, with memory error detection and correction, data of two different sizes. For writing, separate parity generators form parity bits for the respective memory banks. For reading, separate parity detectors/generators operate on data of the separate memory banks.

Подробнее
08-10-2020 дата публикации

PREFETCH MANAGEMENT IN A HIERARCHICAL CACHE SYSTEM

Номер: US20200320006A1
Принадлежит: Texas Instruments Inc

An apparatus includes a CPU core, a first memory cache with a first line size, and a second memory cache having a second line size larger than the first line size. Each line of the second memory cache includes an upper half and a lower half. A memory controller subsystem is coupled to the CPU core and to the first and second memory caches. Upon a miss in the first memory cache for a first target address, the memory controller subsystem determines that the first target address resulting in the miss maps to the lower half of a line in the second memory cache, retrieves the entire line from the second memory cache, and returns the entire line from the second memory cache to the first memory cache.

Подробнее
26-04-2022 дата публикации

Cache size change

Номер: US0011314644B2

A method includes determining, by a level one (L1) controller, to change a size of a L1 main cache; servicing, by the L1 controller, pending read requests and pending write requests from a central processing unit (CPU) core; stalling, by the L1 controller, new read requests and new write requests from the CPU core; writing back and invalidating, by the L1 controller, the L1 main cache. The method also includes receiving, by a level two (L2) controller, an indication that the L1 main cache has been invalidated and, in response, flushing a pipeline of the L2 controller; in response to the pipeline being flushed, stalling, by the L2 controller, requests received from any master; reinitializing, by the L2 controller, a shadow L1 main cache. Reinitializing includes clearing previous contents of the shadow L1 main cache and changing the size of the shadow L1 main cache.
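
The ordering of the resize protocol can be summarized as a phase sequence; the enum below is an illustrative paraphrase, not an API:

    /* Phases of the L1 resize handshake, in the order the abstract gives. */
    enum resize_phase {
        SERVICE_PENDING,     /* L1: finish pending CPU reads and writes */
        STALL_NEW_CPU_REQS,  /* L1: hold off new CPU requests           */
        WB_INVALIDATE_L1,    /* L1: write back and invalidate           */
        FLUSH_L2_PIPELINE,   /* L2: on the invalidation indication      */
        STALL_ALL_MASTERS,   /* L2: once the pipeline is flushed        */
        REINIT_SHADOW_L1     /* L2: clear shadow, set the new size      */
    };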

Подробнее
26-03-2024 дата публикации

Memory pipeline control in a hierarchical memory system

Номер: US0011940918B2
Принадлежит: Texas Instruments Incorporated

In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.
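
The bypass decision reduces to two tests; a hedged sketch (the predicate inputs are assumptions about what the controller tracks):

    #include <stdbool.h>

    /* A write marked not to allocate in the higher-level cache may skip
     * the memory pipeline only when no in-flight pipeline transaction
     * forbids being passed. */
    static bool use_bypass_path(bool is_bypass_write,
                                bool pipeline_blocks_passing)
    {
        return is_bypass_write && !pipeline_blocks_passing;
    }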

Подробнее
08-08-2023 дата публикации

Multi-level cache security

Номер: US0011720495B2

In described examples, a coherent memory system includes a central processing unit (CPU) and first and second level caches. The CPU is arranged to execute program instructions to manipulate data in at least a first or second secure context. Each of the first and second caches stores a secure code for indicating the at least first or second secure contexts by which data for a respective cache line is received. The first and second level caches maintain coherency in response to comparing the secure codes of respective lines of cache and executing a cache coherency operation in response.

Подробнее
26-04-2022 дата публикации

Prefetch kill and revival in an instruction cache

Номер: US0011314660B2

A system comprises a processor including a CPU core, first and second memory caches, and a memory controller subsystem. The memory controller subsystem speculatively determines a hit or miss condition of a virtual address in the first memory cache and speculatively translates the virtual address to a physical address. Associated with the hit or miss condition and the physical address, the memory controller subsystem configures a status to a valid state. Responsive to receipt of a first indication from the CPU core that no program instructions associated with the virtual address are needed, the memory controller subsystem reconfigures the status to an invalid state and, responsive to receipt of a second indication from the CPU core that a program instruction associated with the virtual address is needed, the memory controller subsystem reconfigures the status back to a valid state.
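
A sketch of the status bookkeeping (record layout hypothetical): the speculative results are retained and only the valid bit toggles on kill and revival:

    #include <stdint.h>
    #include <stdbool.h>

    /* Speculative prefetch record: hit/miss and translation survive a
     * kill, so a later revival reuses them instead of redoing the work. */
    struct prefetch_slot {
        bool     valid;   /* false after kill, true after revival */
        bool     hit;     /* speculative hit/miss result          */
        uint64_t paddr;   /* speculative translation              */
    };

    static void kill_prefetch(struct prefetch_slot *s)   { s->valid = false; }
    static void revive_prefetch(struct prefetch_slot *s) { s->valid = true;  }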

Подробнее
02-08-2012 дата публикации

Level One Data Cache Line Lock and Enhanced Snoop Protocol During Cache Victims and Writebacks to Maintain Level One Data Cache and Level Two Cache Coherence

Номер: US20120198163A1
Принадлежит: TEXAS INSTRUMENTS INCORPORATED

This invention assures cache coherence in a multi-level cache system upon eviction of a higher level cache line. A victim buffer stores data on evicted lines. On a DMA access that may be cached in the higher level cache the lower level cache sends a snoop write. The address of this snoop write is compared with the victim buffer. On a hit in the victim buffer the write completes in the victim buffer. When the victim data passes to the next cache level it is written into a second victim buffer to be retired when the data is committed to cache. DMA write addresses are compared to addresses in this second victim buffer. On a match the write takes place in the second victim buffer. On a failure to match the controller sends a snoop write.

2. The data processing system of claim 1, wherein: each cache line of said first level cache including a tag indicating a valid and a dirty status of said data stored therein; said second level memory controller including a set of shadow tags corresponding to said tags of said first level data cache; wherein said second level memory controller further includes a shadow tags comparator connected to said shadow tags and direct memory access write port comparing said address of each shadow tag to said direct memory access write address; and wherein said second level memory controller enables a snoop write to said first level cache controller if said shadow tags comparator detects a match between a direct memory access write address and the address of one of said shadow tags.

This application claims priority under 35 U.S.C. 119(e)(1) to U.S. Provisional Application No. 61/387,283 filed Sep. 28, 2010. The technical field of this invention is cache for digital data processors. This invention is applicable to data processing systems with multi-level memory where the second level (L2) memory is used for both unified (code and instructions) level two cache and flat (L2 SRAM) memory used to hold critical data and instructions. The second level memory (L2 ...

Подробнее
13-06-2023 дата публикации

Cache coherence shared state suppression

Номер: US0011675700B2

A method includes receiving, by a level two (L2) controller, a first request for a cache line in a shared cache coherence state; mapping, by the L2 controller, the first request to a second request for a cache line in an exclusive cache coherence state; and responding, by the L2 controller, to the second request.

Подробнее
01-01-2015 дата публикации

DYNAMIC MANAGEMENT OF WRITE-MISS BUFFER TO REDUCE WRITE-MISS TRAFFIC

Номер: US20150006820A1
Принадлежит: TEXAS INSTRUMENTS INCORPORATED

Traffic output from a cache write-miss buffer is controlled by determining whether a predetermined condition is satisfied, and outputting an oldest entry from the buffer only in response to a determination that the predetermined condition is satisfied. Posting of a new entry to the buffer is insufficient to satisfy the predetermined condition.

1. A method of controlling output traffic from a buffer that stores write-miss entries associated with one level of a cache for subsequent forwarding to another level of the cache, comprising: determining whether a predetermined condition is satisfied; and outputting an oldest entry from the buffer only in response to a determination that said predetermined condition is satisfied; wherein posting of a new entry to the buffer is insufficient to satisfy said predetermined condition.

2. The method of claim 1, wherein said predetermined condition is satisfied if said oldest entry has been stored in the buffer for a predetermined number of write cycles.

3. The method of claim 2, wherein said predetermined number of write cycles is dynamically adjustable based on a number of unused locations available in the buffer.

4. The method of claim 2, wherein said predetermined condition is satisfied if an entry in the buffer requires expedited forwarding to said another level of cache.

5. The method of claim 4, wherein said predetermined condition is satisfied if the buffer contains a predetermined threshold number of entries.

6. The method of claim 1, wherein said predetermined condition is satisfied if a predetermined number of write cycles have occurred since an entry was last output from the buffer.

7. The method of claim 2, wherein said predetermined condition is satisfied if the buffer contains a predetermined threshold number of entries.

8. The method of claim 1, wherein said predetermined condition is satisfied if the buffer contains a predetermined threshold number of entries.

9. The method of claim 1, wherein said predetermined ...
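
Combining the claimed triggers into one predicate gives a sketch like the following (thresholds hypothetical); note that posting a new entry is deliberately absent from the condition:

    #include <stdint.h>
    #include <stdbool.h>

    /* Drain the oldest entry when it has aged past a limit, the buffer
     * is too full, or some entry needs expedited forwarding. */
    static bool drain_oldest(uint32_t oldest_age_cycles, uint32_t age_limit,
                             uint32_t entries, uint32_t entry_limit,
                             bool expedite)
    {
        return expedite || oldest_age_cycles >= age_limit
                        || entries >= entry_limit;
    }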

Подробнее
01-02-2022 дата публикации

Pipelined read-modify-write operations in cache memory

Номер: US0011237905B2
Принадлежит: Texas Instruments Incorporated

In described examples, a processor system includes a processor core that generates memory write requests, a cache memory, and a memory pipeline of the cache memory. The memory pipeline has a holding buffer, an anchor stage, and an RMW pipeline. The anchor stage determines whether a data payload of a write request corresponds to a partial write. If so, the data payload is written to the holding buffer and conforming data is read from a corresponding cache memory address to merge with the data payload. The RMW pipeline has a merge stage and a syndrome generation stage. The merge stage merges the data payload in the holding buffer with the conforming data to make merged data. The syndrome generation stage generates an ECC syndrome using the merged data. The memory pipeline writes the data payload and ECC syndrome to the cache memory.

Подробнее
12-07-2016 дата публикации

Zero cycle clock invalidate operation

Номер: US0009390011B2

A method to eliminate the delay of a block invalidate operation in a multi CPU environment by overlapping the block invalidate operation with normal CPU accesses, thus making the delay transparent. A range check is performed on each CPU access while a block invalidate operation is in progress, and an access that maps to within the address range of the block invalidate operation is treated as a cache miss to ensure that the requesting CPU will receive valid data.

Подробнее
29-03-2012 дата публикации

Cache with Multiple Access Pipelines

Номер: US20120079204A1
Принадлежит: Texas Instruments Inc

Parallel pipelines are used to access a shared memory. The shared memory is accessed via a first pipeline by a processor to access cached data from the shared memory. The shared memory is accessed via a second pipeline by a memory access unit to access the shared memory. A first set of tags is maintained for use by the first pipeline to control access to the cache memory, while a second set of tags is maintained for use by the second pipeline to access the shared memory. Arbitrating for access to the cache memory for a transaction request in the first pipeline and for a transaction request in the second pipeline is performed after each pipeline has checked its respective set of tags.

Подробнее
24-12-2020 дата публикации

LOOKAHEAD PRIORITY COLLECTION TO SUPPORT PRIORITY ELEVATION

Номер: US20200401532A1
Принадлежит:

A queuing requester for access to a memory system is provided. Transaction requests are received from two or more requestors for access to the memory system. Each transaction request includes an associated priority value. A request queue of the received transaction requests is formed in the queuing requester. A highest priority value of all pending transaction requests within the request queue is determined. An elevated priority value is selected when the highest priority value is higher than the priority value of an oldest transaction request in the request queue; otherwise the priority value of the oldest transaction request is selected. The oldest transaction request in the request queue with the selected priority value is then provided to the memory system. An arbitration contest with other requesters for access to the memory system is performed using the selected priority value.

1. A system-on-a-chip comprising: a set of processing cores; a shared memory; an interconnect coupling the ...; a memory controller coupled to the interconnect that includes: a pipeline that includes registers to store a set of transaction requests and a set of priorities associated with the set of transaction requests, wherein the pipeline is configured to provide a first transaction request of the set of transaction requests to the interconnect; and merge logic coupled to the pipeline and configured to cause an external priority to be provided to the interconnect along with the first transaction request such that: when a second priority associated with a second transaction request of the set of transaction requests is greater than a first priority associated with the first transaction request, cause the external priority to be based on the second priority; and when the second priority is not greater than the first priority, cause the external priority to be based on the first priority of the first transaction request.
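
The elevation rule itself is a max scan over pending priorities; a sketch assuming index 0 holds the oldest request, larger numbers mean higher priority, and the queue is non-empty:

    #include <stdint.h>
    #include <stddef.h>

    /* The oldest request is issued with the highest priority currently
     * pending anywhere in the queue, never lower than its own. */
    static uint32_t issue_priority(const uint32_t *prio, size_t n)
    {
        uint32_t best = prio[0];          /* prio[0]: oldest request */
        for (size_t i = 1; i < n; i++)
            if (prio[i] > best)
                best = prio[i];
        return best;
    }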

Подробнее
10-09-2020 дата публикации

Cache Preload Operations Using Streaming Engine

Номер: US20200285470A1
Принадлежит:

A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache preload operation is performed on a cache in the memory using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.

1. A method comprising: receiving an instruction that specifies a base address, a data size, and a level of a cache memory to operate on; determining, based on the base address and the data size, a set of addresses associated with the instruction; and issuing a set of cache preload operations to the cache memory that includes a cache preload operation for each address in the set of addresses.

2. The method of claim 1, wherein the cache memory includes a level 2 (L2) cache and a level 3 (L3) cache, and the instruction specifies whether to operate on the L2 cache or the L3 cache.

3. The method of claim 1, wherein the cache memory includes a level 1 (L1) cache and a level 2 (L2) cache, and the set of cache preload operations are issued to the L2 cache via a data path that does not include the L1 cache.

4. The method of claim 1, wherein the instruction specifies whether to preload the cache memory for read or write.

5. The method of claim 4, further comprising, when the instruction specifies to preload the cache memory for read, the set of cache preload operations request to preload the cache memory in a shared access mode.

6. The method of claim 4, further comprising, when the instruction specifies to preload the cache memory for write, the set of cache preload operations request to preload the cache memory in an exclusive access mode.

7. The method of claim 1, wherein: the instruction has an associated privilege level; and for each cache preload operation in the set of cache preload operations, determining whether the privilege level permits ...

Подробнее
02-08-2012 дата публикации

Cache Pre-Allocation of Ways for Pipelined Allocate Requests

Номер: US20120198171A1
Принадлежит: TEXAS INSTRUMENTS INCORPORATED

This invention is a data processing system with a data cache. The cache controller responds to a cache miss requiring allocation by pre-allocating a way in the set to an allocation request according to the least recently used indication of the ways and then updating the least recently used indication of the remaining ways of the set. This permits read allocate requests to the same set to proceed without introducing processing stalls due to way contention. This also allows multiple outstanding allocate requests to the same set and way combination. The cache also compares the address of a newly received allocation request to stall this allocation request if the address matches an address of any pending allocation request.

1. A data processing system comprising: a central processing unit executing program instructions to manipulate data; a data cache connected to said central processing unit temporarily storing in a plurality of cache lines data for manipulation by said central processing unit, said data cache including a plurality of sets each having a plurality of ways and a least recently used indication for each way of each set; a cache controller including an address to set mapping unit responsive to an address of an allocation request triggered by a cache miss, said address to set mapping unit determining which set can cache data corresponding to said allocation request; said cache controller operable to: pre-allocate a way in said set to an allocation request according to said least recently used indication of said ways, and update said least recently used indication of all ways of said set upon said pre-allocation.

2. The data processing system of claim 1, wherein: said cache controller is further operable to permit read allocate requests to the same set to be processed without introducing processing stalls due to way contention.

3. The data processing system of claim 1, wherein: said cache controller is further operable to allow multiple outstanding allocate requests to ...
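
A sketch of pre-allocating the LRU way and immediately re-aging the set (the rank encoding is hypothetical: larger rank = older):

    #include <stdint.h>

    /* Pick the oldest way for the pending allocate and update ranks at
     * once, so a second allocate to the same set sees fresh LRU state
     * and does not contend for the same way. */
    static unsigned preallocate_way(uint8_t rank[], unsigned ways)
    {
        unsigned victim = 0;
        for (unsigned w = 1; w < ways; w++)
            if (rank[w] > rank[victim])
                victim = w;
        for (unsigned w = 0; w < ways; w++)
            rank[w]++;              /* age all ways ...           */
        rank[victim] = 0;           /* ... victim is now youngest */
        return victim;
    }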

Подробнее
14-02-2023 дата публикации

Memory pipeline control in a hierarchical memory system

Номер: US0011580024B2
Принадлежит: Texas Instruments Incorporated

In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.

Подробнее
07-04-2015 дата публикации

Level one data cache line lock and enhanced snoop protocol during cache victims and writebacks to maintain level one data cache and level two cache coherence

Номер: US0009003122B2

This invention assures cache coherence in a multi-level cache system upon eviction of a higher level cache line. A victim buffer stores data on evicted lines. On a DMA access that may be cached in the higher level cache the lower level cache sends a snoop write. The address of this snoop write is compared with the victim buffer. On a hit in the victim buffer the write completes in the victim buffer. When the victim data passes to the next cache level it is written into a second victim buffer to be retired when the data is committed to cache. DMA write addresses are compared to addresses in this second victim buffer. On a match the write takes place in the second victim buffer. On a failure to match the controller sends a snoop write.

Подробнее
17-11-2015 дата публикации

Programmable address-based write-through cache control

Номер: US0009189331B2

This invention is a cache system with a memory attribute register having plural entries. Each entry stores a write-through or a write-back indication for a corresponding memory address range. On a write to cached data, the cache consults the memory attribute register for the corresponding address range. Writes to addresses in regions marked as write-through always update all levels of the memory hierarchy. Writes to addresses in regions marked as write-back update only the first cache level that can service the write. The memory attribute register is preferably a memory mapped control register writable by the central processing unit.
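
A sketch of the per-range lookup on each write (the table layout and the write-back default are assumptions):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* One attribute entry per address range, as in the abstract. */
    struct mar_entry { uint64_t base, size; bool write_through; };

    /* Consulted on every write to cached data; write-through ranges
     * update all levels, write-back ranges only the first that hits. */
    static bool is_write_through(const struct mar_entry *mar, size_t n,
                                 uint64_t addr)
    {
        for (size_t i = 0; i < n; i++)
            if (addr >= mar[i].base && addr < mar[i].base + mar[i].size)
                return mar[i].write_through;
        return false;   /* assumed default: write-back */
    }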

Подробнее
14-07-2020 дата публикации

Lookahead priority collection to support priority elevation

Номер: US0010713180B2

A queuing requester for access to a memory system is provided. Transaction requests are received from two or more requestors for access to the memory system. Each transaction request includes an associated priority value. A request queue of the received transaction requests is formed in the queuing requester. A highest priority value of all pending transaction requests within the request queue is determined. An elevated priority value is selected when the highest priority value is higher than the priority value of an oldest transaction request in the request queue; otherwise the priority value of the oldest transaction request is selected. The oldest transaction request in the request queue with the selected priority value is then provided to the memory system. An arbitration contest with other requesters for access to the memory system is performed using the selected priority value.

Подробнее
15-02-2022 дата публикации

Error correcting codes for multi-master memory controller

Номер: US0011249842B2
Принадлежит: Texas Instruments Incorporated

An apparatus includes a central processing unit (CPU) core and a cache subsystem coupled to the CPU core. The cache subsystem includes a memory configured to store a line of data and an error correcting code (ECC) syndrome associated with the line of data, where the ECC syndrome is calculated based on the line of data and the ECC syndrome is a first type ECC. The cache subsystem also includes a controller configured to, in response to a request from a master configured to implement a second type ECC, the request being directed to the line of data, transform the first type ECC syndrome for the line of data to a second type ECC syndrome and send a response to the master. The response includes the line of data and the second type ECC syndrome associated with the line of data.
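
Functionally, the transform amounts to re-encoding the protected line in the requester's code; a stand-in sketch (the patent may convert syndromes directly rather than regenerate from data):

    #include <stdint.h>
    #include <stddef.h>

    typedef uint8_t (*ecc_gen_fn)(const uint8_t *line, size_t n);

    /* Produce the second-type syndrome for a line whose stored syndrome
     * is of the first type; here simply regenerated from the data. */
    static uint8_t to_second_type_ecc(const uint8_t *line, size_t n,
                                      ecc_gen_fn second_type)
    {
        return second_type(line, n);
    }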

Подробнее
26-11-2019 дата публикации

Prefetch kill and revival in an instruction cache

Номер: US0010489305B1

A system comprises a processor including a CPU core, first and second memory caches, and a memory controller subsystem. The memory controller subsystem speculatively determines a hit or miss condition of a virtual address in the first memory cache and speculatively translates the virtual address to a physical address. Associated with the hit or miss condition and the physical address, the memory controller subsystem configures a status to a valid state. Responsive to receipt of a first indication from the CPU core that no program instructions associated with the virtual address are needed, the memory controller subsystem reconfigures the status to an invalid state and, responsive to receipt of a second indication from the CPU core that a program instruction associated with the virtual address is needed, the memory controller subsystem reconfigures the status back to a valid state.

Подробнее
15-11-2012 дата публикации

Managing Bandwidth Allocation in a Processing Node Using Distributed Arbitration

Номер: US20120290756A1
Принадлежит: Texas Instruments Inc

Management of access to shared resources within a system comprising a plurality of requesters and a plurality of target resources is provided. A separate arbitration point is associated with each target resource. An access priority value is assigned to each requester. An arbitration contest is performed for access to a first target resource by requests from two or more of the requesters using a first arbitration point associated with the first target resource to determine a winning requester. The request from the winning requester is forwarded to a second target resource. A second arbitration contest is performed for access to the second target resource by the forwarded request from the winning requester and requests from one or more of the plurality of requesters using a second arbitration point associated with the second target resource.

Подробнее
11-10-2012 дата публикации

ENHANCED PIPELINING AND MULTI-BUFFER ARCHITECTURE FOR LEVEL TWO CACHE CONTROLLER TO MINIMIZE HAZARD STALLS AND OPTIMIZE PERFORMANCE

Номер: US20120260031A1
Принадлежит: TEXAS INSTRUMENTS INCORPORATED

This invention is a data processing system including a central processing unit, an external interface, a level one cache, level two memory including level two unified cache and directly addressable memory. A level two memory controller includes a directly addressable memory read pipeline, a central processing unit write pipeline, an external cacheable pipeline and an external non-cacheable pipeline.

1. A data processing system comprising: a central processing unit executing program instructions to manipulate data; an external interface; at least one level one cache connected to said central processing unit temporarily storing at least one of program instructions for execution by said central processing unit and data for manipulation by said central processing unit; a level two memory connected to said at least one level one cache, said level two memory including: a level two unified cache temporarily storing instructions and data for supply of instructions and data to said at least one level one cache, and a directly addressable memory; and a level two memory controller connected to said at least one level one cache, said level two memory and said external interface, said level two memory controller including: a directly addressable memory read pipeline connected to said at least one level one cache receiving read requests for data stored in said directly addressable memory, a central processing unit write pipeline receiving central processing unit write requests to addresses in said directly addressable memory, an external cacheable pipeline receiving read accesses and write accesses to external memory at cacheable addresses, and an external non-cacheable pipeline receiving read accesses and write accesses to external memory at non-cacheable addresses.

2. The data processing system of claim 1, wherein: said read requests of said directly addressable memory read pipeline are generated by cache misses in said at least one level one cache.

3. The data processing ...

Подробнее
31-08-2021 дата публикации

Shadow caches for level 2 cache controller

Номер: US0011106583B2

An apparatus includes a CPU core and a L1 cache subsystem coupled to the CPU core. The L1 cache subsystem includes a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem coupled to the L1 cache subsystem. The L2 cache subsystem includes a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller. The L2 controller is configured to receive an indication from the L1 controller that a cache line A is being relocated from the L1 main cache to the L1 victim cache; in response to the indication, update the shadow L1 main cache to reflect that the cache line A is no longer located in the L1 main cache; and in response to the indication, update the shadow L1 victim cache to reflect that the cache line A is located in the L1 victim cache.
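
The shadow update is a pair of bookkeeping flips; a hypothetical model:

    #include <stdbool.h>

    /* L2's shadow view of where an L1 line lives. */
    struct shadow_loc { bool in_l1_main; bool in_l1_victim; };

    /* On notice that line A moved from L1 main to L1 victim, update both
     * shadow structures so later snoop decisions stay accurate. */
    static void on_relocation(struct shadow_loc *line_a)
    {
        line_a->in_l1_main   = false;
        line_a->in_l1_victim = true;
    }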

Подробнее
15-06-2023 дата публикации

MEMORY PIPELINE CONTROL IN A HIERARCHICAL MEMORY SYSTEM

Номер: US20230185719A1
Принадлежит: Texas Instruments Inc

In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.

Подробнее
26-09-2023 дата публикации

Error correcting codes for multi-master memory controller

Номер: US0011768733B2

An apparatus includes a central processing unit (CPU) core and a cache subsystem coupled to the CPU core. The cache subsystem includes a memory configured to store a line of data and an error correcting code (ECC) syndrome associated with the line of data, where the ECC syndrome is calculated based on the line of data and the ECC syndrome is a first type ECC. The cache subsystem also includes a controller configured to, in response to a request from a master configured to implement a second type ECC, the request being directed to the line of data, transform the first type ECC syndrome for the line of data to a second type ECC syndrome and send a response to the master. The response includes the line of data and the second type ECC syndrome associated with the line of data.

Подробнее
26-07-2012 дата публикации

Robust Hamming Code Implementation for Soft Error Detection, Correction, and Reporting in a Multi-Level Cache System Using Dual Banking Memory Scheme

Номер: US20120192027A1
Принадлежит: TEXAS INSTRUMENTS INCORPORATED

The invention is a memory system having two memory banks which can store and recall with memory error detection and correction on data of two different sizes. For writing separate parity generators form parity bits for respective memory banks. For reading separate parity detector/generators operate on data of separate memory banks. 1. A memory system comprising:a first data source having N bits;a first parity generator connected to said first data source generating parity bits corresponding to said N bits;a second data source having 2N bits;a second parity generator connected to an upper half of bits of said second data source generating parity bits corresponding to N upper half bits;a third parity generator connected to a lower half of bits of said second data source generating parity bits corresponding to N lower half bits;a first multiplexer having a first input connected to said first parity generator receiving both said N bits and corresponding parity bits, a second input connected to said second parity generator receiving both said N upper half bits and corresponding parity bits and an output;a second multiplexer having a first input connected to said first parity generator receiving both said N bits and corresponding parity bits, a second input connected to said third parity generator receiving both said N lower half bits and corresponding parity bits and an output; a first memory bank having a write data input connected to said output of said first multiplexer, and', 'a second memory bank having a write data input connected to said output of said second multiplexer; and, 'a memory including'} select said first input as output of said first multiplexer and select no input as output of said second multiplexer thereby storing said N bits and said corresponding parity bits of said first data source in said first memory bank,', 'select no input as output of said first multiplexer and select said first input of said second multiplexer thereby storing said N bits ...

Подробнее
31-03-2020 дата публикации

Cache preload operations using streaming engine

Номер: US0010606596B2

A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache preload operation is performed on a cache in the memory using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.

Подробнее
04-04-2023 дата публикации

Prefetch kill and revival in an instruction cache

Номер: US0011620236B2

A system comprises a processor including a CPU core, first and second memory caches, and a memory controller subsystem. The memory controller subsystem speculatively determines a hit or miss condition of a virtual address in the first memory cache and speculatively translates the virtual address to a physical address. Associated with the hit or miss condition and the physical address, the memory controller subsystem configures a status to a valid state. Responsive to receipt of a first indication from the CPU core that no program instructions associated with the virtual address are needed, the memory controller subsystem reconfigures the status to an invalid state and, responsive to receipt of a second indication from the CPU core that a program instruction associated with the virtual address is needed, the memory controller subsystem reconfigures the status back to a valid state.

Подробнее
04-02-2016 дата публикации

Programmable Address-Based Write-Through Cache Control

Номер: US20160034396A1
Принадлежит: TEXAS INSTRUMENTS INCORPORATED

This invention is a cache system with a memory attribute register having plural entries. Each entry stores a write-through or a write-back indication for a corresponding memory address range. On a write to cached data, the cache consults the memory attribute register for the corresponding address range. Writes to addresses in regions marked as write-through always update all levels of the memory hierarchy. Writes to addresses in regions marked as write-back update only the first cache level that can service the write. The memory attribute register is preferably a memory mapped control register writable by the central processing unit.

1.-6. (canceled)

7. A data processing method comprising the steps of: storing a plurality of memory attributes for a corresponding memory address range in a memory attribute register having plural entries, said memory attributes including a write-through enable (WTE) bit indicating a write-through or a write-back for said corresponding memory address range; temporarily storing in a plurality of cache lines of a first data cache data for manipulation by said central processing unit; temporarily storing in a plurality of cache lines in a second level memory including a second level cache data for manipulation by said central processing unit; upon a central processing unit write to data cached in said first data cache: writing said data in a corresponding cache line in said first data cache and passing said data on to write base memory if said write-through enable (WTE) bit of said memory attribute register entry for an address of said central processing unit write indicates write-through, and writing said data in a corresponding cache line in said first data cache and not passing said data on to write in base memory if said write-through enable (WTE) bit of said memory attribute register entry for an address of said central processing unit write indicates write-back; and writing said data in a corresponding cache line in said ...

Подробнее
11-07-2013 дата публикации

Asynchronous Clock Dividers to Reduce On-Chip Variations of Clock Timing

Номер: US20130176060A1
Принадлежит: TEXAS INSTRUMENTS INCORPORATED

This invention is a means to definitively establish the occurrence of the various clock edges used in a design, balancing clock edges at various locations within an integrated circuit. Clocks entering from outside sources can be a source of on-chip variations (OCV) resulting in unacceptable clock edge skewing. The present invention arranges placement of the various clock dividers on the chip at the remote locations where these clocks are used. This minimizes the uncertainty of the edge occurrence.

1. An integrated circuit comprising: a system clock circuit generating a system clock signal; a plurality of circuit modules disposed on the integrated circuit; and a plurality of module clock circuits, each clock generator circuit: connected to said system clock circuit for receiving said system clock signal, connected to a corresponding one of said plurality of circuit modules and supplying a programmable clock signal to said corresponding circuit module, and disposed proximate to said corresponding circuit module and distant from said system clock circuit.

2. The integrated circuit of claim 1, wherein: at least one of said plurality of module clock circuits includes a programmable divider dividing said system clock signal by a programmable integral amount.

3. The integrated circuit of claim 2, wherein: said programmable integral amount is selected from the set including 2, 3 and 4.

4. The integrated circuit of claim 2, wherein: a plurality of finite state machines, each finite state machine having an input connected to said system clock circuit receiving said system clock signal and an output generating a clock gating signal for a predetermined division of said system clock; a clock gating signal multiplexer having a plurality of inputs, each input connected to a corresponding one of said plurality of finite state machines receiving a corresponding clock gating signal, an output and a control input receiving a clock selection signal, said multiplexer connecting one of ...

Подробнее
03-05-2022 дата публикации

Multiple-requestor memory access pipeline and arbiter

Номер: US0011321248B2
Принадлежит: Texas Instruments Incorporated

In described examples, a coherent memory system includes a central processing unit (CPU) and first and second level caches. The memory system can include a pipeline for accessing data stored in one of the caches. Requestors can access the data stored in one of the caches by sending requests at the same time; the pipeline arbitrates among such simultaneous requests.

Подробнее
02-08-2012 дата публикации

Programmable Address-Based Write-Through Cache Control

Номер: US20120198164A1
Принадлежит: TEXAS INSTRUMENTS INCORPORATED

This invention is a cache system with a memory attribute register having plural entries. Each entry stores a write-through or a write-back indication for a corresponding memory address range. On a write to cached data, the cache consults the memory attribute register for the corresponding address range. Writes to addresses in regions marked as write-through always update all levels of the memory hierarchy. Writes to addresses in regions marked as write-back update only the first cache level that can service the write. The memory attribute register is preferably a memory mapped control register writable by the central processing unit.

1. A data processing system comprising: a central processing unit executing program instructions to manipulate data; a memory attribute register having plural entries, each entry storing a write-through or a write-back indication for a corresponding memory address range; a data cache connected to said central processing unit temporarily storing in a plurality of cache lines data for manipulation by said central processing unit, said data cache upon a central processing unit write to cached data: writing said data in a corresponding cache line and passing said data on to write base memory if said memory attribute register entry for an address of said central processing unit write indicates write-through, and writing said data in a corresponding cache line and not passing said data on to write in base memory if said memory attribute register entry for an address of said central processing unit write indicates write-back.

2. The data processing system of claim 1, wherein: said memory attribute register is a memory mapped control register writable by said central processing unit.

3. The data processing system of claim 1, further comprising: a second level cache connected to said data cache temporarily storing in a plurality of cache lines data for manipulation by said central processing unit; and further writing said data in a ...

Подробнее
13-06-2023 дата публикации

Parallelized scrubbing transactions

Номер: US0011675660B2

An apparatus includes a central processing unit (CPU) core and a cache subsystem coupled to the CPU core. The cache subsystem includes a first memory, a second memory, and a controller coupled to the first and second memories. The controller is configured to execute a sequence of scrubbing transactions on the first memory and execute a functional transaction on the second memory. One of the scrubbing transactions and the functional transaction are executed concurrently.
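
The concurrency here comes from the two transactions targeting different memories; a purely illustrative pairing of one issue slot:

    #include <stdint.h>

    /* One issue slot: a scrub access to the first memory can share the
     * slot with a functional access to the second memory, since the two
     * use disjoint resources. */
    struct issue_slot {
        uint32_t scrub_addr;        /* first memory  */
        uint32_t functional_addr;   /* second memory */
    };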

Подробнее
05-04-2022 дата публикации

Global coherence operations

Номер: US0011294707B2

A method includes receiving, by a L2 controller, a request to perform a global operation on a L2 cache and preventing new blocking transactions from entering a pipeline coupled to the L2 cache while permitting new non-blocking transactions to enter the pipeline. Blocking transactions include read transactions and non-victim write transactions. Non-blocking transactions include response transactions, snoop transactions, and victim transactions. The method further includes, in response to an indication that the pipeline does not contain any pending blocking transactions, preventing new snoop transactions from entering the pipeline while permitting new response transactions and victim transactions to enter the pipeline; in response to an indication that the pipeline does not contain any pending snoop transactions, preventing all new transactions from entering the pipeline; and, in response to an indication that the pipeline does not contain any pending transactions, performing the global ...
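
The quiescing order can be read as a four-phase gate; an illustrative enum, not an API:

    /* Transaction gating phases before a global cache operation. */
    enum quiesce_phase {
        GATE_BLOCKING,   /* stop reads and non-victim writes             */
        GATE_SNOOPS,     /* once no blocking txns remain in the pipeline */
        GATE_ALL,        /* once no snoop txns remain                    */
        RUN_GLOBAL_OP    /* once the pipeline is completely empty        */
    };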

Подробнее
07-07-2015 дата публикации

Managing bandwidth allocation in a processing node using distributed arbitration

Номер: US0009075743B2

Management of access to shared resources within a system comprising a plurality of requesters and a plurality of target resources is provided. A separate arbitration point is associated with each target resource. An access priority value is assigned to each requester. An arbitration contest is performed for access to a first target resource by requests from two or more of the requesters using a first arbitration point associated with the first target resource to determine a winning requester. The request from the winning requester is forwarded to a second target resource. A second arbitration contest is performed for access to the second target resource by the forwarded request from the winning requester and requests from one or more of the plurality of requesters using a second arbitration point associated with the second target resource.

Подробнее
02-08-2012 дата публикации

NON-BLOCKING, PIPELINED WRITE ALLOCATES WITH ALLOCATE DATA MERGING IN A MULTI-LEVEL CACHE SYSTEM

Номер: US20120198161A1
Принадлежит: TEXAS INSTRUMENTS INCORPORATED

This invention handles write request cache misses. The cache controller stores write data, sends a read request to external memory for a corresponding cache line, merges the write data with data returned from the external memory and stores merged data in the cache. The cache controller includes buffers with plural entries storing the write address, the write data, the position of the write data within a cache line and a unique identification number. This stored data enables the cache controller to proceed to servicing other access requests while waiting for a response from the external memory.

1. A data processing system comprising: a central processing unit executing program instructions to manipulate data; a cache connected to said central processing unit temporarily storing in a plurality of cache lines data for manipulation by said central processing unit; and a cache controller connected to said cache operation unit and said second level cache operable on a write request generating a cache miss to: store write data corresponding to said write request in a write data buffer, send a read request to an external memory for a cache line of data encompassing said write request, merge said write data stored in said write buffer with said cache line of data returned from the external memory in response to said read request, and store said merged cache line of data in a corresponding cache line in said cache.

2. The data processing system of claim 1, wherein: said cache controller is further operable to assign a unique identification number to said write request; and wherein said cache controller includes: a command buffer including a plurality of entries, each entry storing a write address and a corresponding unique identification number, and said write data buffer includes a plurality of entries, each entry storing write data and said corresponding unique identification number.

3. The data processing system of claim 2, wherein: match said data returned ...
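
The merge step overlays the buffered write data on the returned line; a minimal sketch (offsets and names hypothetical):

    #include <stdint.h>
    #include <string.h>

    /* Overlay the buffered write data onto the line fetched from
     * external memory; the write data wins where the two overlap. */
    static void merge_allocate(uint8_t *line, const uint8_t *wdata,
                               uint32_t offset, uint32_t len)
    {
        memcpy(line + offset, wdata, len);
    }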

Подробнее
15-11-2012 дата публикации

Lookahead Priority Collection to Support Priority Elevation

Номер: US20120290755A1
Принадлежит:

A queuing requester for access to a memory system. Transaction requests are received from two or more requestors for access to the memory system. Each transaction request includes an associated priority value. A request queue is formed in the queuing requester. A highest priority value of all pending transaction requests within the request queue is determined. An elevated priority value is selected when the highest priority value is higher than the priority value of an oldest transaction request in the request queue; otherwise the priority value of the oldest transaction request is selected. The oldest transaction request in the request queue with the selected priority value is then provided to the memory system. An arbitration contest with other requesters for access to the memory system uses the selected priority value.
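A small C sketch of the priority-elevation rule, assuming a simple array-ordered queue where index 0 is the oldest request:

#include <stdio.h>

/* Hypothetical queue entry; arrival order is implied by array index. */
typedef struct { int priority; } Txn;

/* Select the priority used in arbitration for the oldest request: elevate
 * it to the highest priority pending anywhere in the queue. */
static int elevated_priority(const Txn *q, int n) {
    int highest = q[0].priority;          /* q[0] is the oldest request */
    for (int i = 1; i < n; i++)
        if (q[i].priority > highest) highest = q[i].priority;
    return highest;                       /* >= the oldest's own priority */
}

int main(void) {
    Txn queue[] = { {2}, {1}, {6} };      /* a newer high-priority arrival */
    printf("oldest request arbitrates with priority %d\n",
           elevated_priority(queue, 3));  /* prints 6, not 2 */
    return 0;
}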

Подробнее
24-01-2013 дата публикации

Process Variability Tolerant Programmable Memory Controller for a Pipelined Memory System

Номер: US20130021858A1
Принадлежит: TEXAS INSTRUMENTS INCORPORATED

In an embodiment of the invention, an integrated circuit includes a pipelined memory array and a memory control circuit. The pipelined memory array contains a plurality of memory banks. Based partially on the read access time information of a memory bank, the memory control circuit is configured to select the number of clock cycles used during read latency.

1. An integrated circuit comprising: a pipelined memory array, the pipelined memory array comprising a plurality of memory banks; a memory control circuit configured to select the number of clock cycles used for a read latency in the pipelined memory array partially based on the read access time information of a memory bank.
2. The integrated circuit of wherein the read access time information of the memory bank is provided to the memory control circuit through one or more pins on the integrated circuit.
3. The integrated circuit of wherein the read access time information of the memory bank is provided to the memory control circuit through one or more efuse registers.
4. The integrated circuit of wherein access latency is determined by the number of pipelined memory banks in the plurality of memory banks.
5. The integrated circuit of wherein the memory control circuit controls how much time expires between consecutive accesses of a particular memory bank in the plurality of memory banks.
6. The integrated circuit of wherein the time that expires between consecutive accesses of the particular memory bank in the plurality of memory banks is equal to or greater than an access latency.
7. The integrated circuit of wherein data may be read from the pipelined memory array every clock cycle when the read access addresses are consecutive.
8. The integrated circuit of wherein the memory control circuit comprises: a plurality of delay circuits connected in series, wherein each delay circuit in the plurality of delay circuits has an input and an output; a multiplexer having data inputs, select inputs and an output; the data inputs of ...
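A C sketch of how a control circuit might map a bank's read access time to a cycle count; the nanosecond inputs and round-up rule are illustrative assumptions:

#include <stdio.h>

/* Hypothetical mapping from a bank's measured read access time (ns) to a
 * number of clock cycles of read latency, for a given clock period. */
static int read_latency_cycles(double access_ns, double clock_ns) {
    int cycles = (int)(access_ns / clock_ns);
    if (cycles * clock_ns < access_ns) cycles++;  /* round up */
    return cycles < 1 ? 1 : cycles;
}

int main(void) {
    /* A slow process-corner bank needs more cycles than a fast one. */
    printf("fast bank: %d cycles\n", read_latency_cycles(2.1, 1.0));
    printf("slow bank: %d cycles\n", read_latency_cycles(3.4, 1.0));
    return 0;
}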

Подробнее
27-01-2022 дата публикации

Memory pipeline control in a hierarchical memory system

Номер: US20220027275A1
Принадлежит: Texas Instruments Inc

In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined to be a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.
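A C sketch of the bypass decision, with a hypothetical transaction type and conflict check standing in for the pipeline's real occupancy logic:

#include <stdbool.h>
#include <stdio.h>

typedef struct { bool is_write; bool allocates_in_l2; } Txn;

/* A bypass write is a write that will not result in a corresponding write
 * to the higher level (L2) cache, so it may skip the L2 memory pipeline. */
static bool is_bypass_write(const Txn *t) {
    return t->is_write && !t->allocates_in_l2;
}

/* Hypothetical occupancy check: the bypass may be taken only when no
 * transaction in the pipeline prevents passing (e.g., to the same line). */
static bool pipeline_prevents_passing(int conflicting_txns) {
    return conflicting_txns > 0;
}

int main(void) {
    Txn t = { .is_write = true, .allocates_in_l2 = false };
    if (is_bypass_write(&t) && !pipeline_prevents_passing(0))
        printf("send write to lower memory controller via bypass path\n");
    else
        printf("send write through the memory pipeline\n");
    return 0;
}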

Подробнее
17-04-2014 дата публикации

ZERO CYCLE CLOCK INVALIDATE OPERATION

Номер: US20140108737A1
Принадлежит: TEXAS INSTRUMENTS INCORPORATED

A method to eliminate the delay of a block invalidate operation in a multi-CPU environment by overlapping the block invalidate operation with normal CPU accesses, thus making the delay transparent. A range check is performed on each CPU access while a block invalidate operation is in progress, and an access that maps to within the address range of the block invalidate operation will be treated as a cache miss to ensure that the requesting CPU will receive valid data.

1. A method of performing a block invalidate operation comprising the steps of: determining whether a CPU memory access maps to be within the address range of a block invalidate operation; forcing a cache miss for memory accesses within said range; marking the cache line being accessed as invalid; issuing a read miss request for the access; setting a valid/invalid bit to invalid in the LRU; using said valid/invalid bit to determine whether the line so marked needs to be invalidated by the block invalidate operation in progress.
2. A cache memory system in a multiple CPU environment wherein: cache coherence is ensured by block invalidate operations when multiple CPUs access data in a cache memory; CPU memory access requests are monitored during a block invalidate operation to determine whether they map to a location within the address range of the block invalidate operation, and accesses within the range are treated as a cache miss; a read miss request is issued for the cache miss, thus ensuring that the CPU issuing the memory access request will receive valid data from main memory.

The technical field of this invention is cache memories for digital data processors. In a hierarchical cache system a block invalidate operation may be needed to invalidate a block of lines cached in the memory system. In the block coherence operation the user programs the base address and the number of words that need to be removed from the cache. The cache controller then iterates through the entire cache memory, and if it finds an address that ...
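The range check that forces a miss can be sketched in C as follows, assuming a flat address range [base, base+len):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* While a block invalidate covering [base, base+len) is in progress, any
 * CPU access inside that range is forced to miss so fresh data is fetched. */
static bool force_miss(uint32_t addr, uint32_t base, uint32_t len,
                       bool block_inval_active) {
    return block_inval_active && addr >= base && addr < base + len;
}

int main(void) {
    uint32_t base = 0x8000, len = 0x1000;
    printf("0x8400 -> %s\n",
           force_miss(0x8400, base, len, true) ? "treated as miss" : "normal lookup");
    printf("0x9400 -> %s\n",
           force_miss(0x9400, base, len, true) ? "treated as miss" : "normal lookup");
    return 0;
}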

Подробнее
03-03-2022 дата публикации

Hardware coherence signaling protocol

Номер: US20220066937A1
Принадлежит: Texas Instruments Inc

An apparatus includes a CPU core and a L1 cache subsystem including a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem including a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller configured to receive a read request from the L1 controller as a single transaction. The read request includes a read address, a first indication of an address and a coherence state of a cache line A to be moved from the L1 main cache to the L1 victim cache to allocate space for data returned in response to the read request, and a second indication of an address and a coherence state of a cache line B to be removed from the L1 victim cache in response to the cache line A being moved to the L1 victim cache.
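A C sketch of the single-transaction message format, using hypothetical MESI-style state names:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical coherence states. */
typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } Coh;

/* The read request carries, in one transaction, the read address plus the
 * two side effects in the L1: line A moving main -> victim, and line B
 * leaving the victim cache. The L2 can update its shadow copies from this
 * single message. */
typedef struct {
    uint64_t read_addr;
    uint64_t lineA_addr; Coh lineA_state;  /* main cache -> victim cache */
    uint64_t lineB_addr; Coh lineB_state;  /* evicted from victim cache */
} ReadReq;

int main(void) {
    ReadReq r = { 0x1000, 0x2000, MODIFIED, 0x3000, SHARED };
    printf("read 0x%llx; shadow: move 0x%llx (state %d) to victim, drop 0x%llx\n",
           (unsigned long long)r.read_addr,
           (unsigned long long)r.lineA_addr, (int)r.lineA_state,
           (unsigned long long)r.lineB_addr);
    return 0;
}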

Подробнее
28-03-2019 дата публикации

Cache Management Operations Using Streaming Engine

Номер: US20190095204A1
Принадлежит: Texas Instruments Inc

A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache management operation is performed on a cache in the memory using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.
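A C sketch of the second operating mode: walking a block of addresses and issuing one cache management operation per line (the 64-byte line size and operation names are assumptions):

#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES 64  /* assumed cache line size */

typedef enum { OP_CLEAN, OP_INVALIDATE, OP_CLEAN_INVALIDATE } CacheOp;

/* Instead of fetching data, walk the block of generated addresses and
 * issue one cache management operation per line. */
static void block_cache_op(uint64_t base, uint64_t size, CacheOp op) {
    for (uint64_t a = base; a < base + size; a += LINE_BYTES)
        printf("cache op %d on line at 0x%llx\n", (int)op, (unsigned long long)a);
}

int main(void) {
    block_cache_op(0x10000, 256, OP_INVALIDATE);  /* four 64-byte lines */
    return 0;
}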

Подробнее
28-03-2019 дата публикации

Cache Preload Operations Using Streaming Engine

Номер: US20190095205A1
Принадлежит:

A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache preload operation is performed on a cache in the memory using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.

1. A method of managing a cache in a computer system, the method comprising: executing instructions including a first stream instruction and a second stream instruction on a processor within the computer system; fetching a stream of data elements from a memory coupled to the processor using a stream of addresses generated in a first mode of operating a streaming engine in response to executing the first stream instruction; and performing a block cache preload operation on the cache using a block of addresses generated in a second mode of operating the streaming engine in response to executing the second stream instruction.
2. The method of claim 1, wherein the second stream instruction includes an opcode to define a type of block preload operation and specifies an operand to define a beginning address of the block of addresses.
3. The method of claim 2, wherein the cache of the computer system has at least two hierarchical levels of memory; and wherein performing the block preload operation operates on a particular hierarchical level of memory of the cache specified by the opcode.
4. The method of claim 2, further comprising: asserting a status signal by the streaming engine to indicate a block preload operation is being performed and de-asserting the status signal to indicate the block preload operation is complete; and executing an instruction by the processor that causes the processor to wait until the block preload operation is complete.
5. The method of claim 1, wherein the block of addresses used for performing the block preload operation is a block of virtual addresses ...

Подробнее
25-06-2015 дата публикации

Level One Data Cache Line Lock and Enhanced Snoop Protocol During Cache Victims and Writebacks to Maintain Level One Data Cache and Level Two Cache Coherence

Номер: US20150178221A1
Принадлежит:

This invention assures cache coherence in a multi-level cache system upon eviction of a higher level cache line. A victim buffer stores data on evicted lines. On a DMA access that may be cached in the higher level cache, the lower level cache sends a snoop write. The address of this snoop write is compared with the victim buffer. On a hit in the victim buffer the write completes in the victim buffer. When the victim data passes to the next cache level it is written into a second victim buffer to be retired when the data is committed to cache. DMA write addresses are compared to addresses in this second victim buffer. On a match the write takes place in the second victim buffer. On a failure to match the controller sends a snoop write.

1-2. (canceled)
3. In a data processing system including a central processing unit executing program instructions to manipulate data, a first level data cache temporarily storing in a plurality of first cache lines data for manipulation by the central processing unit, a second level memory including a second level cache temporarily storing in a plurality of second cache lines data for manipulation by the central processing unit and a second level local memory directly addressable by the central processing unit, and a direct memory access unit operating under control of the central processing unit to control data transfers including transferring data into and out of the second level local memory, a method of data processing operation comprising the steps of: initiating a victim entry in a first victim buffer upon selection of a first cache line in the first level data cache for replacement; storing a first victim address corresponding to data stored in a first cache line selected for replacement, and data stored in said first cache line selected for replacement, in the victim entry of the first victim buffer; receiving snoop write data and a corresponding snoop write address; comparing the first victim address of each entry in the first victim ...
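The snoop-write compare against the victim buffer might look like this C sketch, with a hypothetical fixed-size buffer and one data word per entry:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VB_ENTRIES 4

/* Hypothetical victim buffer entry holding an evicted line in flight. */
typedef struct { bool valid; uint64_t addr; uint32_t data; } VictimEntry;

/* On a snoop write, compare against buffered victim addresses; on a hit
 * the write completes in the victim buffer so the in-flight line stays
 * current. */
static bool snoop_write(VictimEntry *vb, uint64_t addr, uint32_t data) {
    for (int i = 0; i < VB_ENTRIES; i++) {
        if (vb[i].valid && vb[i].addr == addr) {
            vb[i].data = data;   /* write completes in the victim buffer */
            return true;
        }
    }
    return false;                /* miss: forward the snoop write instead */
}

int main(void) {
    VictimEntry vb[VB_ENTRIES] = { { true, 0x4000, 0xAAAA } };
    printf("snoop 0x4000: %s\n", snoop_write(vb, 0x4000, 0xBBBB) ? "hit" : "forwarded");
    printf("snoop 0x5000: %s\n", snoop_write(vb, 0x5000, 0xCCCC) ? "hit" : "forwarded");
    return 0;
}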

Подробнее
10-09-2020 дата публикации

Cache Management Operations Using Streaming Engine

Номер: US20200285469A1
Принадлежит:

A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache management operation is performed on a cache in the memory using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.

1. A method comprising: receiving a cache management instruction that specifies a base address, a data size, and a level of a cache memory to operate on; determining, based on the base address and the data size, a set of addresses associated with the cache management instruction; and issuing a set of cache management operations to the cache memory that includes a cache management operation for each address in the set of addresses.
2. The method of claim 1, wherein the cache memory includes a plurality of levels that includes a first level representing a point of unification and a second level representing a point of coherence, and wherein the first level and the second level are different.
3. The method of claim 2, wherein the cache management instruction specifies the level of the cache memory to operate on by specifying whether to operate on the point of unification or the point of coherence.
4. The method of claim 3, wherein the first level is a level 2 (L2) cache level and the second level is a level 3 (L3) cache level.
5. The method of claim 1, wherein the cache management instruction further specifies whether the cache management operation is a clean operation, an invalidate operation, or a clean-and-invalidate operation.
6. The method of claim 1, wherein the cache memory includes a level 2 (L2) cache and the issuing of the set of cache management operations issues the set of cache management operations directly to the L2 cache.
7. The method of further comprising providing a control signal to the cache memory to indicate that the set of ...

Подробнее
26-11-2020 дата публикации

Pipeline arbitration

Номер: US20200371834A1
Принадлежит: Texas Instruments Inc

A method includes receiving, by a first stage in a pipeline, a first transaction from a previous stage in the pipeline; in response to the first transaction comprising a high priority transaction, processing the high priority transaction by sending it to a buffer; receiving a second transaction from the previous stage; in response to the second transaction comprising a low priority transaction, processing the low priority transaction by monitoring a full signal from the buffer while sending the low priority transaction to the buffer; in response to the full signal being asserted and no high priority transaction being available from the previous stage, pausing processing of the low priority transaction; in response to the full signal being asserted and a high priority transaction being available from the previous stage, stopping processing of the low priority transaction and processing the high priority transaction; and in response to the full signal being de-asserted, processing the low priority transaction by sending it to the buffer.
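A C sketch of one decision by such a stage; the enum and flags are hypothetical simplifications of the stage's inputs:

#include <stdbool.h>
#include <stdio.h>

typedef enum { NONE, LOW_PRIO, HIGH_PRIO } Txn;

/* One decision of the stage described above: high priority goes straight
 * to the buffer; low priority defers whenever the buffer is full, and
 * yields entirely if a high priority transaction becomes available. */
static void stage_step(Txn current, bool buffer_full, bool high_available) {
    if (current == HIGH_PRIO) {
        printf("send high priority transaction to buffer\n");
    } else if (current == LOW_PRIO) {
        if (!buffer_full)        printf("send low priority transaction to buffer\n");
        else if (high_available) printf("stop low priority, process high priority\n");
        else                     printf("pause low priority until buffer drains\n");
    }
}

int main(void) {
    stage_step(LOW_PRIO, false, false);
    stage_step(LOW_PRIO, true,  false);
    stage_step(LOW_PRIO, true,  true);
    stage_step(HIGH_PRIO, true, false);
    return 0;
}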

Подробнее
26-11-2020 дата публикации

PARALLELIZED SCRUBBING TRANSACTIONS

Номер: US20200371862A1
Принадлежит:

An apparatus includes a central processing unit (CPU) core and a cache subsystem coupled to the CPU core. The cache subsystem includes a first memory, a second memory, and a controller coupled to the first and second memories. The controller is configured to execute a sequence of scrubbing transactions on the first memory and execute a functional transaction on the second memory. One of the scrubbing transactions and the functional transaction are executed concurrently.

1. An apparatus, comprising: a central processing unit (CPU) core; and a cache subsystem coupled to the CPU core, the cache subsystem comprising: a first memory; a second memory; and a controller coupled to the first and second memories, the controller configured to: execute a sequence of scrubbing transactions on the first memory; and execute a functional transaction on the second memory; wherein one of the scrubbing transactions and the functional transaction are executed concurrently.
2. The apparatus of claim 1, wherein: the sequence of scrubbing transactions comprises a first sequence; the functional transaction comprises a first functional transaction; and at a time after the first sequence of scrubbing transactions is executed on the first memory, the controller is further configured to: execute a second sequence of scrubbing transactions on the second memory; and execute a second functional transaction on the first memory; wherein one of the second sequence of scrubbing transactions and the second functional transaction are executed concurrently.
3. The apparatus of claim 1, wherein the cache subsystem further comprises: a first pipeline coupled to the first memory and to the controller, the first pipeline comprising a first scrubber state machine; a second pipeline coupled to the second memory and to the controller, the second pipeline comprising a second scrubber state machine; and a scrubber control register comprising an enable field, a burst delay field, and a cycle delay ...
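A toy C model of the concurrency claim: a scrub on one memory proceeds in the same cycle as a functional access to the other (the bank numbering and stall behavior are assumptions):

#include <stdio.h>

/* Toy model: while bank 0 is being scrubbed, functional transactions are
 * steered to bank 1, so one scrub and one functional access run in the
 * same cycle. */
typedef struct { int scrub_index; } Scrubber;

static void cycle(Scrubber *s, int functional_bank) {
    int scrub_bank = 0;                /* first memory under scrub */
    if (functional_bank != scrub_bank)
        printf("cycle: scrub line %d of bank %d || functional access to bank %d\n",
               s->scrub_index++, scrub_bank, functional_bank);
    else
        printf("cycle: functional access to bank %d, scrub stalls\n", functional_bank);
}

int main(void) {
    Scrubber s = { 0 };
    cycle(&s, 1);   /* concurrent */
    cycle(&s, 1);   /* concurrent */
    cycle(&s, 0);   /* conflict: same bank */
    return 0;
}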

Подробнее
26-11-2020 дата публикации

ERROR CORRECTING CODES FOR MULTI-MASTER MEMORY CONTROLLER

Номер: US20200371874A1
Принадлежит:

An apparatus includes a central processing unit (CPU) core and a cache subsystem coupled to the CPU core. The cache subsystem includes a memory configured to store a line of data and an error correcting code (ECC) syndrome associated with the line of data, where the ECC syndrome is calculated based on the line of data and the ECC syndrome is a first type ECC. The cache subsystem also includes a controller configured to, in response to a request from a master configured to implement a second type ECC, the request being directed to the line of data, transform the first type ECC syndrome for the line of data to a second type ECC syndrome and send a response to the master. The response includes the line of data and the second type ECC syndrome associated with the line of data.

1. An apparatus, comprising: a central processing unit (CPU) core; and a cache subsystem coupled to the CPU core, the cache subsystem comprising: a memory configured to store a line of data and an error correcting code (ECC) syndrome associated with the line of data, wherein the ECC syndrome is calculated based on the line of data, wherein the ECC syndrome is a first type ECC; and a controller configured to, in response to a request from a master configured to implement a second type ECC, the request being directed to the line of data: transform the first type ECC syndrome for the line of data to a second type ECC syndrome; and send a response to the master, the response comprising the line of data and the second type ECC syndrome associated with the line of data.
2. The apparatus of claim 1, wherein when the controller transforms the first type ECC syndrome for the line of data to the second type ECC, the controller is further configured to: re-calculate a first type ECC syndrome for the line of data; determine that a single error is present in the line of data based on a comparison of the re-calculated first type ECC syndrome and the first type ECC syndrome associated with the line of ...
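A C sketch of the transform flow, using two toy parity codes in place of the real first- and second-type ECCs; a real SECDED implementation would also correct a single-bit error before regenerating the target code:

#include <stdint.h>
#include <stdio.h>

/* Two toy "ECC" schemes stand in for the two real code types: the stored
 * (first-type) code is even parity; the requesting master's (second-type)
 * code is odd parity. */
static uint8_t ecc_type1(uint32_t w) {
    uint8_t p = 0;
    while (w) { p ^= w & 1; w >>= 1; }
    return p;
}
static uint8_t ecc_type2(uint32_t w) { return ecc_type1(w) ^ 1; }

/* Transform: verify the line against its stored first-type code, then
 * emit the second-type code expected by the master. */
static int respond(uint32_t line, uint8_t stored_type1, uint8_t *out_type2) {
    if (ecc_type1(line) != stored_type1) return -1;  /* error path omitted */
    *out_type2 = ecc_type2(line);
    return 0;
}

int main(void) {
    uint32_t line = 0x12345678;
    uint8_t t2;
    if (respond(line, ecc_type1(line), &t2) == 0)
        printf("response: line 0x%X with second-type ECC %u\n", line, t2);
    return 0;
}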

Подробнее
26-11-2020 дата публикации

HANDLING NON-CORRECTABLE ERRORS

Номер: US20200371875A1
Принадлежит:

An apparatus includes a central processing unit (CPU) core and a cache subsystem coupled to the CPU core. The cache subsystem includes a first memory, a second memory, and a controller coupled to the first and second memories. The controller is configured to receive a transaction from a master, the transaction directed to the first memory and comprising an address; re-calculate an error correcting code (ECC) for a line of data in the second memory associated with the address; determine that a non-correctable error is present in the line of data in the second memory based on a comparison of the re-calculated ECC and a stored ECC for the line of data; and in response to the determination that a non-correctable error is present in the line of data in the second memory, terminate the transaction without accessing the first memory.

1. An apparatus, comprising: a central processing unit (CPU) core; and a cache subsystem coupled to the CPU core, the cache subsystem comprising: a first memory; a second memory; and a controller coupled to the first and second memories, the controller configured to: receive a transaction from a master, the transaction directed to the first memory and comprising an address; re-calculate an error correcting code (ECC) for a line of data in the second memory associated with the address; determine that a non-correctable error is present in the line of data in the second memory based on a comparison of the re-calculated ECC and a stored ECC for the line of data; and in response to the determination that a non-correctable error is present in the line of data in the second memory, terminate the transaction without accessing the first memory.
2. The apparatus of claim 1, wherein the controller is further configured to, in response to the determination that a non-correctable error is present in the line of data, generate a response to the master that indicates the presence of a non-correctable error associated with ...
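A C sketch of the early-termination check, with even parity standing in for the real ECC (parity alone cannot separate correctable from non-correctable errors, so treat this purely as control-flow illustration):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy "ECC": even parity over a word stands in for the stored code. */
static uint8_t parity(uint32_t w) {
    uint8_t p = 0;
    while (w) { p ^= w & 1; w >>= 1; }
    return p;
}

/* Re-calculate the code for the shadow line and compare with the stored
 * one; on mismatch treat it as non-correctable and terminate the
 * transaction before the first memory is ever accessed. */
static bool process_transaction(uint32_t line, uint8_t stored_ecc) {
    if (parity(line) != stored_ecc) {
        printf("non-correctable error: transaction terminated\n");
        return false;
    }
    printf("ECC clean: access proceeds to the first memory\n");
    return true;
}

int main(void) {
    uint32_t line = 0xDEADBEEF;
    process_transaction(line, parity(line));      /* clean */
    process_transaction(line ^ 1, parity(line));  /* corrupted */
    return 0;
}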

Подробнее
26-11-2020 дата публикации

WRITE CONTROL FOR READ-MODIFY-WRITE OPERATIONS IN CACHE MEMORY

Номер: US20200371918A1
Принадлежит:

In described examples, a processor system includes a processor core that generates memory write requests, and a cache memory with a memory controller having a memory pipeline. The cache memory has cache lines of length L. The cache memory has a minimum write length that is less than a cache line length of the cache memory. The memory pipeline determines whether the data payload includes a first chunk and ECC syndrome that correspond to a partial write and are writable by a first cache write operation, and a second chunk and ECC syndrome that correspond to a full write operation that can be performed separately from the first cache write operation. The memory pipeline performs an RMW operation to store the first chunk and ECC syndrome in the cache memory, and performs the full write operation to store the second chunk and ECC syndrome in the cache memory.

1. A processor system comprising: a processor core configured to generate memory write requests; a cache memory having multiple cache lines; and a memory controller of the cache memory, the memory controller configured to write to the cache memory with a minimum write length that is less than a cache line length of the cache memory, the memory controller configured to receive a write request having a ..., the memory controller configured to: determine whether the data payload includes a first chunk and a first error correction code (ECC) syndrome that correspond to a partial write and are writable by a first cache write operation, and includes a second chunk and a second ECC syndrome that correspond to a full write operation that can be performed as a second cache write operation that is separate from the first cache write operation; perform a read-modify-write (RMW) operation as the first cache write operation to store the first chunk and an ECC syndrome generated by the RMW operation in the cache memory; and perform the full write operation as the second cache write operation to store the second chunk and the second ECC syndrome in the cache memory.
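A C sketch of splitting one write payload into a partial head/tail (requiring read-modify-write) and aligned full chunks; the 4-byte granule is an assumed minimum write length:

#include <stdio.h>

#define CHUNK 4   /* assumed minimum full-write granule in bytes */

/* Split a write of `len` bytes at `offset` into a leading partial chunk
 * that needs read-modify-write and aligned full chunks that can be
 * written directly with their precomputed ECC. */
static void plan_write(unsigned offset, unsigned len) {
    unsigned end = offset + len;
    unsigned first_full = (offset + CHUNK - 1) / CHUNK * CHUNK;
    unsigned head_end = end < first_full ? end : first_full;
    if (offset < head_end)
        printf("RMW partial chunk: bytes %u..%u\n", offset, head_end - 1);
    for (unsigned a = first_full; a + CHUNK <= end; a += CHUNK)
        printf("full write chunk:  bytes %u..%u\n", a, a + CHUNK - 1);
    if (end > first_full && end % CHUNK)
        printf("RMW partial chunk: bytes %u..%u\n", end / CHUNK * CHUNK, end - 1);
}

int main(void) {
    plan_write(2, 10);  /* unaligned head needs RMW; the rest writes whole */
    return 0;
}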

Подробнее
26-11-2020 дата публикации

Cache size change

Номер: US20200371919A1
Принадлежит: Texas Instruments Inc

A method includes determining, by a level one (L1) controller, to change a size of a L1 main cache; servicing, by the L1 controller, pending read requests and pending write requests from a central processing unit (CPU) core; stalling, by the L1 controller, new read requests and new write requests from the CPU core; writing back and invalidating, by the L1 controller, the L1 main cache. The method also includes receiving, by a level two (L2) controller, an indication that the L1 main cache has been invalidated and, in response, flushing a pipeline of the L2 controller; in response to the pipeline being flushed, stalling, by the L2 controller, requests received from any master; reinitializing, by the L2 controller, a shadow L1 main cache. Reinitializing includes clearing previous contents of the shadow L1 main cache and changing the size of the shadow L1 main cache.

Подробнее
26-11-2020 дата публикации

Shadow caches for level 2 cache controller

Номер: US20200371920A1
Принадлежит: Texas Instruments Inc

An apparatus including a CPU core and a L1 cache subsystem coupled to the CPU core. The L1 cache subsystem includes a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem coupled to the L1 cache subsystem. The L2 cache subsystem includes a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller. The L2 controller receives an indication from the L1 controller that a cache line A is being relocated from the L1 main cache to the L1 victim cache; in response to the indication, updates the shadow L1 main cache to reflect that the cache line A is no longer located in the L1 main cache; and, in response to the indication, updates the shadow L1 victim cache to reflect that the cache line A is located in the L1 victim cache.
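A C sketch of mirroring the relocation in the L2's shadow structures, with toy fixed-size shadow directories:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SLOTS 4

/* Toy shadow directories: the L2's bookkeeping of what the L1 holds. */
typedef struct { bool valid[SLOTS]; uint64_t addr[SLOTS]; } Shadow;

static void remove_line(Shadow *s, uint64_t a) {
    for (int i = 0; i < SLOTS; i++)
        if (s->valid[i] && s->addr[i] == a) s->valid[i] = false;
}

static void insert_line(Shadow *s, uint64_t a) {
    for (int i = 0; i < SLOTS; i++)
        if (!s->valid[i]) { s->valid[i] = true; s->addr[i] = a; return; }
}

/* On the L1's indication that line A moved main -> victim, mirror the move. */
static void on_relocation(Shadow *shadow_main, Shadow *shadow_victim, uint64_t a) {
    remove_line(shadow_main, a);
    insert_line(shadow_victim, a);
    printf("shadow updated: 0x%llx now tracked in victim cache\n",
           (unsigned long long)a);
}

int main(void) {
    Shadow smain = { {true}, {0x2000} }, svictim = { {false} };
    on_relocation(&smain, &svictim, 0x2000);
    return 0;
}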

Подробнее
26-11-2020 дата публикации

Tag update bus for updated coherence state

Номер: US20200371923A1
Принадлежит: Texas Instruments Inc

An apparatus includes a CPU core and a L1 cache subsystem including a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem coupled to the L1 cache subsystem by a transaction bus and a tag update bus. The L2 cache subsystem includes a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller. The L2 controller receives a message from the L1 controller over the tag update bus, including a valid signal, an address, and a coherence state. In response to the valid signal being asserted, the L2 controller identifies an entry in the shadow L1 main cache or the shadow L1 victim cache having an address corresponding to the address of the message and updates a coherence state of the identified entry to be the coherence state of the message.
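A C sketch of handling one tag update message; the parallel address/state arrays are a hypothetical stand-in for the shadow cache lookup:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } Coh;

/* One message on the tag update bus, separate from the transaction bus. */
typedef struct { bool valid; uint64_t addr; Coh state; } TagUpdate;

/* On an asserted valid signal, find the shadowed entry with the matching
 * address and overwrite its coherence state with the message's state. */
static void on_tag_update(const uint64_t *shadow_addrs, Coh *shadow_states,
                          int n, TagUpdate m) {
    if (!m.valid) return;
    for (int i = 0; i < n; i++)
        if (shadow_addrs[i] == m.addr) {
            shadow_states[i] = m.state;
            printf("shadow entry 0x%llx -> state %d\n",
                   (unsigned long long)m.addr, (int)m.state);
        }
}

int main(void) {
    uint64_t addrs[] = { 0x1000, 0x2000 };
    Coh states[] = { EXCLUSIVE, SHARED };
    TagUpdate m = { true, 0x2000, MODIFIED };
    on_tag_update(addrs, states, 2, m);
    return 0;
}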

Подробнее
26-11-2020 дата публикации

CONTROLLER WITH CACHING AND NON-CACHING MODES

Номер: US20200371924A1
Принадлежит:

An apparatus includes a CPU core, a first cache subsystem coupled to the CPU core, and a second memory coupled to the cache subsystem. The first cache subsystem includes a configuration register, a first memory, and a controller. The controller is configured to: receive a request directed to an address in the second memory and, in response to the configuration register having a first value, operate in a non-caching mode. In the non-caching mode, the controller is configured to provide the request to the second memory without caching data returned by the request in the first memory. In response to the configuration register having a second value, the controller is configured to operate in a caching mode. In the caching mode the controller is configured to provide the request to the second memory and cache data returned by the request in the first memory.

1. An apparatus, comprising: a central processing unit (CPU) core; a first cache subsystem coupled to the CPU core, the first cache subsystem comprising: a configuration register; a first memory; and a controller; and a second memory coupled to the first cache subsystem; wherein the controller is configured to: receive a request directed to an address in the second memory; in response to the configuration register having a first value, operate in a non-caching mode, wherein in the non-caching mode the controller is configured to provide the request to the second memory without caching data returned by the request in the first memory; and in response to the configuration register having a second value, operate in a caching mode, wherein in the caching mode the controller is configured to provide the request to the second memory and cache data returned by the request in the first memory.
2. The apparatus of claim 1, wherein the CPU core is configured to set the value of the configuration register to cause the controller to operate in the caching mode or the non-caching mode, wherein the CPU core ...
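A C sketch of the mode switch; the register encodings are assumptions:

#include <stdint.h>
#include <stdio.h>

#define MODE_NONCACHING 0u  /* assumed register encodings */
#define MODE_CACHING    1u

/* Handle a request to the second memory according to the mode register. */
static void handle_request(uint32_t config_reg, uint64_t addr) {
    printf("forward request 0x%llx to second memory\n", (unsigned long long)addr);
    if (config_reg == MODE_CACHING)
        printf("  allocate returned data in first memory (cache)\n");
    else
        printf("  return data without allocating in first memory\n");
}

int main(void) {
    handle_request(MODE_NONCACHING, 0x9000);
    handle_request(MODE_CACHING,    0x9000);
    return 0;
}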

Подробнее
26-11-2020 дата публикации

MERGING DATA FOR WRITE ALLOCATE

Номер: US20200371925A1
Принадлежит:

A method includes receiving, by a level two (L2) controller, a write request for an address that is not allocated as a cache line in a L2 cache. The write request specifies write data. The method also includes generating, by the L2 controller, a read request for the address; reserving, by the L2 controller, an entry in a register file for read data returned in response to the read request; updating, by the L2 controller, a data field of the entry with the write data; updating, by the L2 controller, an enable field of the entry associated with the write data; and receiving, by the L2 controller, the read data and merging the read data into the data field of the entry.

1. A method, comprising: receiving, by a level two (L2) controller, a write request for an address that is not allocated as a cache line in a L2 cache, the write request specifying write data; generating, by the L2 controller, a read request for the address; reserving, by the L2 controller, an entry in a register file for read data returned in response to the read request; updating, by the L2 controller, a data field of the entry with the write data; updating, by the L2 controller, an enable field of the entry associated with the write data; and receiving, by the L2 controller, the read data and merging the read data into the data field of the entry.
2. The method of claim 1, further comprising storing the merged data field of the entry as a cache line in the L2 cache.
3. The method of claim 1, wherein the data field of the entry comprises a plurality of bytes and the enable field of the entry comprises a plurality of sub-fields, each sub-field associated with one of the plurality of bytes.
4. The method of claim 3, wherein updating the enable field further comprises: asserting each sub-field associated with a byte containing write data; and de-asserting each sub-field associated with a byte not containing write data.
5. The method of claim 4, wherein merging the read data into the data field of the ...

Подробнее
26-11-2020 дата публикации

GLOBAL COHERENCE OPERATIONS

Номер: US20200371926A1
Принадлежит:

A method includes receiving, by a L2 controller, a request to perform a global operation on a L2 cache and preventing new blocking transactions from entering a pipeline coupled to the L2 cache while permitting new non-blocking transactions to enter the pipeline. Blocking transactions include read transactions and non-victim write transactions. Non-blocking transactions include response transactions, snoop transactions, and victim transactions. The method further includes, in response to an indication that the pipeline does not contain any pending blocking transactions, preventing new snoop transactions from entering the pipeline while permitting new response transactions and victim transactions to enter the pipeline; in response to an indication that the pipeline does not contain any pending snoop transactions, preventing all new transactions from entering the pipeline; and, in response to an indication that the pipeline does not contain any pending transactions, performing the global operation on the L2 cache.

1. A method, comprising: receiving, by a level two (L2) controller, a request to perform a global operation on a L2 cache; preventing, by the L2 controller, new blocking transactions from entering a pipeline coupled to the L2 cache while permitting new non-blocking transactions to enter the pipeline; wherein blocking transactions comprise read transactions and non-victim write transactions; wherein non-blocking transactions comprise response transactions, snoop transactions, and victim transactions; in response to an indication that the pipeline does not contain any pending blocking transactions, preventing, by the L2 controller, new snoop transactions from entering the pipeline while permitting new response transactions and victim transactions to enter the pipeline; in response to an indication that the pipeline does not contain any pending snoop transactions, preventing, by the L2 controller, all new transactions from entering the pipeline; and in ...

Подробнее
26-11-2020 дата публикации

MULTI-LEVEL CACHE SECURITY

Номер: US20200371927A1
Принадлежит:

In described examples, a coherent memory system includes a central processing unit (CPU) and first and second level caches. The CPU is arranged to execute program instructions to manipulate data in at least a first or second secure context. Each of the first and second caches stores a secure code for indicating the at least first or second secure contexts by which data for a respective cache line is received. The first and second level caches maintain coherency in response to comparing the secure codes of respective lines of cache and executing a cache coherency operation in response.

1. A system, comprising: a central processing unit (CPU) arranged to execute program instructions to manipulate data in at least a first or second secure context, wherein the first and second secure contexts include differing security components therebetween; a first level cache coupled to the CPU to temporarily store data in cache lines for manipulation by the CPU, wherein the first level cache includes a first secure code memory for storing a first-level-cache secure code list of secure codes, wherein each secure code indicates one of the at least first or second secure contexts by which data for a respective cache line is received, and wherein the first level cache includes a first level local memory addressable by the CPU; and a second level cache coupled to the first level cache to temporarily store data in cache lines for manipulation by the CPU, wherein the second level cache includes a second secure code memory for storing a second-level-cache secure code list of secure codes, wherein each secure code indicates one of the at least first or second secure contexts by which data for a respective cache line is received, and wherein the second level cache includes a second level local memory addressable by the CPU and the first level cache.
2. The system of claim 1, wherein the second level cache includes a shadow copy of the first-level-cache secure code list of secure codes.
3. The ...
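A C sketch of the secure-code comparison that would trigger a coherency operation; the one-byte code and evict action are illustrative assumptions:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-line secure code identifying the secure context that
 * fetched the line. */
typedef struct { uint64_t addr; uint8_t secure_code; } Line;

/* Maintain coherency across levels: if the same line was filled under
 * different secure contexts, run a cache coherency operation (here,
 * modeled as an evict/clean). */
static void check_contexts(const Line *l1, const Line *l2) {
    if (l1->addr == l2->addr && l1->secure_code != l2->secure_code)
        printf("secure code mismatch at 0x%llx: evict/clean line\n",
               (unsigned long long)l1->addr);
    else
        printf("secure codes agree: no action\n");
}

int main(void) {
    Line a = { 0x7000, 1 }, b = { 0x7000, 2 };
    check_contexts(&a, &b);
    return 0;
}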

Подробнее
26-11-2020 дата публикации

Hardware coherence for memory controller

Номер: US20200371930A1
Принадлежит: Texas Instruments Inc

A system includes a non-coherent component; a coherent, non-caching component; a coherent, caching component; and a level two (L2) cache subsystem coupled to the non-coherent component, the coherent, non-caching component, and the coherent, caching component. The L2 cache subsystem includes a L2 cache; a shadow level one (L1) main cache; a shadow L1 victim cache; and a L2 controller. The L2 controller is configured to receive and process a first transaction from the non-coherent component; receive and process a second transaction from the coherent, non-caching component; and receive and process a third transaction from the coherent, caching component.

Подробнее
26-11-2020 дата публикации

Cache coherence shared state suppression

Номер: US20200371931A1
Принадлежит: Texas Instruments Inc

A method includes receiving, by a level two (L2) controller, a first request for a cache line in a shared cache coherence state; mapping, by the L2 controller, the first request to a second request for a cache line in an exclusive cache coherence state; and responding, by the L2 controller, to the second request.
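The mapping itself is small enough to show directly in C; the enum names are hypothetical:

#include <stdio.h>

typedef enum { REQ_SHARED, REQ_EXCLUSIVE } ReqState;

/* Shared-state suppression: a request for a shared copy is upgraded to a
 * request for an exclusive copy before it is serviced. */
static ReqState map_request(ReqState r) {
    return (r == REQ_SHARED) ? REQ_EXCLUSIVE : r;
}

int main(void) {
    printf("shared request mapped to %s\n",
           map_request(REQ_SHARED) == REQ_EXCLUSIVE ? "exclusive" : "shared");
    return 0;
}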

Подробнее
26-11-2020 дата публикации

HARDWARE COHERENCE SIGNALING PROTOCOL

Номер: US20200371934A1
Принадлежит:

An apparatus includes a CPU core and a L1 cache subsystem including a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem including a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller configured to receive a read request from the L1 controller as a single transaction. The read request includes a read address, a first indication of an address and a coherence state of a cache line A to be moved from the L1 main cache to the L1 victim cache to allocate space for data returned in response to the read request, and a second indication of an address and a coherence state of a cache line B to be removed from the L1 victim cache in response to the cache line A being moved to the L1 victim cache.

1. An apparatus, comprising: a central processing unit (CPU) core; a level one (L1) cache subsystem coupled to the CPU core, the L1 cache subsystem comprising: a L1 main cache; a L1 victim cache; and a L1 controller; and a level two (L2) cache subsystem coupled to the L1 cache subsystem, the L2 cache subsystem comprising: a L2 main cache; a shadow L1 main cache; a shadow L1 victim cache; and a L2 controller configured to receive a read request from the L1 controller as a single transaction, the read request comprising: a read address; a first indication of an address and a coherence state of a cache line A to be moved from the L1 main cache to the L1 victim cache to allocate space for data returned in response to the read request; and a second indication of an address and a coherence state of a cache line B to be removed from the L1 victim cache in response to the cache line A being moved to the L1 victim cache.
2. The apparatus of claim 1, wherein the L2 controller is further configured to, in response to the first indication: update the shadow L1 main cache to reflect that the cache line A is no longer located in the L1 main cache; and update the shadow L1 victim cache to ...

Подробнее
26-11-2020 дата публикации

PSEUDO-RANDOM WAY SELECTION

Номер: US20200371935A1
Принадлежит:

A method includes receiving a first request to allocate a line in an N-way set associative cache and, in response to a cache coherence state of a way indicating that a cache line stored in the way is invalid, allocating the way for the first request. The method also includes, in response to no ways in the set having a cache coherence state indicating that the cache line stored in the way is invalid, randomly selecting one of the ways in the set. The method also includes, in response to a cache coherence state of the selected way indicating that another request is not pending for the selected way, allocating the selected way for the first request.

1. A method, comprising: receiving a first request to allocate a line in an N-way set associative cache; in response to a cache coherence state of a way indicating that a cache line stored in the way is invalid, allocating the way for the first request; and in response to no ways in the set having a cache coherence state indicating that the cache line stored in the way is invalid: randomly selecting one of the ways in the set; and in response to a cache coherence state of the selected way indicating that another request is not pending for the selected way, allocating the selected way for the first request.
2. The method of claim 1, further comprising, in response to the cache coherence state of the selected way indicating that another request is pending for the selected way, servicing the first request without allocating a line in the cache.
3. The method of claim 2, wherein servicing the first request without allocating a line in the cache further comprises converting the first request to a non-allocating request and sending the non-allocating request to a memory endpoint identified by the first request.
4. The method of claim 1, further comprising, in response to the cache coherence state of the selected way indicating that another request is pending for the selected way, randomly ...
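A C sketch of the allocation policy, where -1 models converting the request to a non-allocating one (the claims also allow re-randomizing instead):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define WAYS 4

typedef struct { bool invalid; bool request_pending; } Way;

/* Allocation policy: prefer an invalid way; otherwise pick a way at
 * random and use it only if no other request is already pending on it.
 * Returns the allocated way index, or -1 to service the request without
 * allocating. */
static int choose_way(const Way *set) {
    for (int i = 0; i < WAYS; i++)
        if (set[i].invalid) return i;
    int r = rand() % WAYS;               /* pseudo-random selection */
    return set[r].request_pending ? -1 : r;
}

int main(void) {
    Way set[WAYS] = { { .invalid = false }, { .invalid = true } };
    printf("allocated way: %d\n", choose_way(set));  /* picks way 1 */
    set[1].invalid = false;
    printf("allocated way (random path): %d\n", choose_way(set));
    return 0;
}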

Подробнее
26-11-2020 дата публикации

MULTIPLE-REQUESTOR MEMORY ACCESS PIPELINE AND ARBITER

Номер: US20200371970A1
Принадлежит:

In described examples, a coherent memory system includes a central processing unit (CPU) and first and second level caches. The memory system can include a pipeline for accessing data stored in one of the caches. Requestors can access the data stored in one of the caches by sending requests at a same time that can be arbitrated by the pipeline.

1. A system, comprising: a cache that includes: a local memory that includes a set of cache lines to store data; and a multi-banked pipeline coupled to access the set of cache lines of the local memory, wherein the multi-banked pipeline includes: a first banked pipe configured to receive first banked transaction requests from a first requestor for access to the local memory; and a second banked pipe configured to receive second banked transaction requests from a second requestor for access to the local memory, wherein the second requestor is different from the first requestor, and wherein the first banked transaction requests are processed by the first banked pipe at a same time as the second banked transaction requests are processed by the second banked pipe.
2. The system of claim 1, wherein the first banked pipe includes a first banked blocking arbiter that is arranged to temporarily block and reorder the first banked transaction requests in the first banked pipe, and wherein the second banked pipe includes a second banked blocking arbiter that is arranged to temporarily block and reorder the second banked transaction requests in the second banked pipe.
3. The system of claim 2, wherein the first banked blocking arbiter is arranged to award priority to non-blocking transactions, so that in response to the awarded priority, a blocking first transaction is held at a blocking stage of the first banked pipe and a first non-blocking transaction is passed to a stage following the blocking stage without being held at the blocking stage.
4. The system of claim 2, wherein the first banked ...

Подробнее
03-08-2023 дата публикации

Lookahead priority collection to support priority elevation

Номер: US20230244611A1
Принадлежит: Texas Instruments Inc

A queuing requester for access to a memory system is provided. Transaction requests are received from two or more requestors for access to the memory system. Each transaction request includes an associated priority value. A request queue of the received transaction requests is formed in the queuing requester. A highest priority value of all pending transaction requests within the request queue is determined. An elevated priority value is selected when the highest priority value is higher than the priority value of an oldest transaction request in the request queue; otherwise the priority value of the oldest transaction request is selected. The oldest transaction request in the request queue with the selected priority value is then provided to the memory system. An arbitration contest with other requesters for access to the memory system is performed using the selected priority value.

Подробнее
04-01-2024 дата публикации

Shadow caches for level 2 cache controller

Номер: US20240004793A1
Принадлежит: Texas Instruments Inc

An apparatus including a CPU core and a L1 cache subsystem coupled to the CPU core. The L1 cache subsystem includes a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem coupled to the L1 cache subsystem. The L2 cache subsystem includes a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller. The L2 controller receives an indication from the L1 controller that a cache line A is being relocated from the L1 main cache to the L1 victim cache; in response to the indication, updates the shadow L1 main cache to reflect that the cache line A is no longer located in the L1 main cache; and, in response to the indication, updates the shadow L1 victim cache to reflect that the cache line A is located in the L1 victim cache.

Подробнее
20-02-2024 дата публикации

Controller with caching and non-caching modes

Номер: US11907753B2
Принадлежит: Texas Instruments Inc

An apparatus includes a CPU core, a first cache subsystem coupled to the CPU core, and a second memory coupled to the cache subsystem. The first cache subsystem includes a configuration register, a first memory, and a controller. The controller is configured to: receive a request directed to an address in the second memory and, in response to the configuration register having a first value, operate in a non-caching mode. In the non-caching mode, the controller is configured to provide the request to the second memory without caching data returned by the request in the first memory. In response to the configuration register having a second value, the controller is configured to operate in a caching mode. In the caching mode the controller is configured to provide the request to the second memory and cache data returned by the request in the first memory.

Подробнее
04-01-2024 дата публикации

Merging data for write allocate

Номер: US20240004694A1
Принадлежит: Texas Instruments Inc

A method includes receiving, by a level two (L2) controller, a write request for an address that is not allocated as a cache line in a L2 cache. The write request specifies write data. The method also includes generating, by the L2 controller, a read request for the address; reserving, by the L2 controller, an entry in a register file for read data returned in response to the read request; updating, by the L2 controller, a data field of the entry with the write data; updating, by the L2 controller, an enable field of the entry associated with the write data; and receiving, by the L2 controller, the read data and merging the read data into the data field of the entry.

Подробнее
21-12-2023 дата публикации

Global coherence operations

Номер: US20230409376A1
Принадлежит: Texas Instruments Inc

A method includes receiving, by a L2 controller, a request to perform a global operation on a L2 cache and preventing new blocking transactions from entering a pipeline coupled to the L2 cache while permitting new non-blocking transactions to enter the pipeline. Blocking transactions include read transactions and non-victim write transactions. Non-blocking transactions include response transactions, snoop transactions, and victim transactions. The method further includes, in response to an indication that the pipeline does not contain any pending blocking transactions, preventing new snoop transactions from entering the pipeline while permitting new response transactions and victim transactions to enter the pipeline; in response to an indication that the pipeline does not contain any pending snoop transactions, preventing all new transactions from entering the pipeline; and, in response to an indication that the pipeline does not contain any pending transactions, performing the global operation on the L2 cache.

Подробнее
17-10-2023 дата публикации

Hardware coherence signaling protocol

Номер: US11789868B2
Принадлежит: Texas Instruments Inc

An apparatus includes a CPU core and a L1 cache subsystem including a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem including a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller configured to receive a read request from the L1 controller as a single transaction. The read request includes a read address, a first indication of an address and a coherence state of a cache line A to be moved from the L1 main cache to the L1 victim cache to allocate space for data returned in response to the read request, and a second indication of an address and a coherence state of a cache line B to be removed from the L1 victim cache in response to the cache line A being moved to the L1 victim cache.

Подробнее
19-12-2023 дата публикации

Prefetch kill and revival in an instruction cache

Номер: JP2023179708A
Принадлежит: Texas Instruments Inc

[Problem] To provide an apparatus and a system-on-chip (SoC) for controlling prefetching of a memory cache. [Solution] A processor 100 includes a CPU core 102, first and second memory caches 130, 155, and a memory controller subsystem 101. The memory controller subsystem speculatively determines a hit or miss condition of a virtual address in the first memory cache 130 and speculatively translates the virtual address to a physical address. Associated with the hit or miss condition and the physical address, the memory controller subsystem configures a status to a valid state. In response to receiving a first indication from the CPU core that the program instructions associated with the virtual address are not needed, the memory controller subsystem reconfigures the status to an invalid state; in response to receiving a second indication from the CPU core that a program instruction associated with the virtual address is needed, it reconfigures the status to a valid state. [Selected figure] FIG. 1

Подробнее
07-05-2024 дата публикации

Prefetch kill and revival in an instruction cache

Номер: US11977491B2
Принадлежит: Texas Instruments Inc

A system comprises a processor including a CPU core, first and second memory caches, and a memory controller subsystem. The memory controller subsystem speculatively determines a hit or miss condition of a virtual address in the first memory cache and speculatively translates the virtual address to a physical address. Associated with the hit or miss condition and the physical address, the memory controller subsystem configures a status to a valid state. Responsive to receipt of a first indication from the CPU core that no program instructions associated with the virtual address are needed, the memory controller subsystem reconfigures the status to an invalid state and, responsive to receipt of a second indication from the CPU core that a program instruction associated with the virtual address is needed, the memory controller subsystem reconfigures the status back to a valid state.
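A C sketch of the kill/revive status handling, with a hypothetical prefetch record that keeps its speculative hit/miss result and translation across the invalid period:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical prefetch record: speculative hit/miss result and
 * translated address, guarded by a valid status the CPU can kill and
 * later revive. */
typedef struct {
    uint64_t vaddr, paddr;
    bool hit_in_l1;
    bool status_valid;
} Prefetch;

static void kill_prefetch(Prefetch *p)   { p->status_valid = false; }
static void revive_prefetch(Prefetch *p) { p->status_valid = true; }

int main(void) {
    Prefetch p = { 0x1000, 0x8000, false, true };
    kill_prefetch(&p);     /* CPU: instructions at vaddr not needed */
    revive_prefetch(&p);   /* CPU: instructions at vaddr needed after all */
    printf("prefetch for 0x%llx valid again: %s (reuses paddr 0x%llx)\n",
           (unsigned long long)p.vaddr, p.status_valid ? "yes" : "no",
           (unsigned long long)p.paddr);
    return 0;
}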

Подробнее
16-12-2021 дата публикации

Shadow caches for level 2 cache controller

Номер: US20210390050A1
Принадлежит: Texas Instruments Inc

An apparatus including a CPU core and a L1 cache subsystem coupled to the CPU core. The L1 cache subsystem includes a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem coupled to the L1 cache subsystem. The L2 cache subsystem includes a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller. The L2 controller receives an indication from the L1 controller that a cache line A is being relocated from the L1 main cache to the L1 victim cache; in response to the indication, updates the shadow L1 main cache to reflect that the cache line A is no longer located in the L1 main cache; and, in response to the indication, updates the shadow L1 victim cache to reflect that the cache line A is located in the L1 victim cache.

Подробнее
16-12-2021 дата публикации

Hardware coherence for memory controller

Номер: US20210390051A1
Принадлежит: Texas Instruments Inc

A system includes a non-coherent component; a coherent, non-caching component; a coherent, caching component; and a level two (L2) cache subsystem coupled to the non-coherent component, the coherent, non-caching component, and the coherent, caching component. The L2 cache subsystem includes a L2 cache; a shadow level one (L1) main cache; a shadow L1 victim cache; and a L2 controller. The L2 controller is configured to receive and process a first transaction from the non-coherent component; receive and process a second transaction from the coherent, non-caching component; and receive and process a third transaction from the coherent, caching component.

Подробнее
21-03-2024 дата публикации

Cache size change

Номер: US20240095169A1
Принадлежит: Texas Instruments Inc

A method includes determining, by a level one (L1) controller, to change a size of a L1 main cache; servicing, by the L1 controller, pending read requests and pending write requests from a central processing unit (CPU) core; stalling, by the L1 controller, new read requests and new write requests from the CPU core; writing back and invalidating, by the L1 controller, the L1 main cache. The method also includes receiving, by a level two (L2) controller, an indication that the L1 main cache has been invalidated and, in response, flushing a pipeline of the L2 controller; in response to the pipeline being flushed, stalling, by the L2 controller, requests received from any master; reinitializing, by the L2 controller, a shadow L1 main cache. Reinitializing includes clearing previous contents of the shadow L1 main cache and changing the size of the shadow L1 main cache.

Подробнее
08-02-2024 дата публикации

Hardware coherence signaling protocol

Номер: US20240045803A1
Принадлежит: Texas Instruments Inc

An apparatus includes a CPU core and a L1 cache subsystem including a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem including a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller configured to receive a read request from the L1 controller as a single transaction. The read request includes a read address, a first indication of an address and a coherence state of a cache line A to be moved from the L1 main cache to the L1 victim cache to allocate space for data returned in response to the read request, and a second indication of an address and a coherence state of a cache line B to be removed from the L1 victim cache in response to the cache line A being moved to the L1 victim cache.

Подробнее
19-10-2023 дата публикации

Hardware coherence for memory controller

Номер: US20230333982A1
Принадлежит: Texas Instruments Inc

A system includes a non-coherent component; a coherent, non-caching component; a coherent, caching component; and a level two (L2) cache subsystem coupled to the non-coherent component, the coherent, non-caching component, and the coherent, caching component. The L2 cache subsystem includes a L2 cache; a shadow level one (L1) main cache; a shadow L1 victim cache; and a L2 controller. The L2 controller is configured to receive and process a first transaction from the non-coherent component; receive and process a second transaction from the coherent, non-caching component; and receive and process a third transaction from the coherent, caching component.

Подробнее
28-07-2022 дата публикации

Tag update bus for updated coherence state

Номер: US20220237122A1
Принадлежит: Texas Instruments Inc

An apparatus includes a CPU core and a L1 cache subsystem including a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem coupled to the L1 cache subsystem by a transaction bus and a tag update bus. The L2 cache subsystem includes a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller. The L2 controller receives a message from the L1 controller over the tag update bus, including a valid signal, an address, and a coherence state. In response to the valid signal being asserted, the L2 controller identifies an entry in the shadow L1 main cache or the shadow L1 victim cache having an address corresponding to the address of the message and updates a coherence state of the identified entry to be the coherence state of the message.

Подробнее
09-05-2024 дата публикации

Controller with caching and non-caching modes

Номер: US20240152385A1
Принадлежит: Texas Instruments Inc

An apparatus includes a CPU core, a first cache subsystem coupled to the CPU core, and a second memory coupled to the cache subsystem. The first cache subsystem includes a configuration register, a first memory, and a controller. The controller is configured to: receive a request directed to an address in the second memory and, in response to the configuration register having a first value, operate in a non-caching mode. In the non-caching mode, the controller is configured to provide the request to the second memory without caching data returned by the request in the first memory. In response to the configuration register having a second value, the controller is configured to operate in a caching mode. In the caching mode the controller is configured to provide the request to the second memory and cache data returned by the request in the first memory.

Подробнее
03-12-2020 дата публикации

Pipelined read-modify-write operations in cache memory

Номер: WO2020243102A1

In described examples, a processor system includes a processor core that generates memory write requests, a cache memory (304), and a memory pipeline of the cache memory (304). The memory pipeline has a holding buffer (306), an anchor stage (302), and an RMW pipeline (300). The anchor stage (302) determines whether a data payload of a write request corresponds to a partial write. If so, the data payload is written to the holding buffer (306) and conforming data is read from a corresponding cache memory (304) address to merge with the data payload. The RMW pipeline (300) has a merge stage (312) and a syndrome generation stage (314). The merge stage (312) merges the data payload in the holding buffer (306) with the conforming data to make merged data. The syndrome generation stage (314) generates an ECC syndrome using the merged data. The memory pipeline writes the data payload and ECC syndrome to the cache memory (304).

Подробнее