TECHNOLOGY/BUSINESS OPPORTUNITY ACTIVE MEMORY DATA REORGANIZATION ENGINE

Agency: Department of Energy
State: California
Level of Government: Federal
Category:
  • A - Research and development
Opps ID: NBD00159149396589660
Posted Date: Sep 28, 2017
Due Date: Oct 26, 2017
Solicitation Number: FBO311-16
Notice Type: Special Notice

Synopsis:
Added: Sep 26, 2017 6:22 pm

Opportunity:


Lawrence Livermore National Laboratory (LLNL), operated by Lawrence Livermore National Security, LLC (LLNS) under contract no. DE-AC52-07NA27344 (Contract 44) with the U.S. Department of Energy (DOE), is offering the opportunity to enter into a collaborative partnership to further research, develop, and commercialize this technology.

Background:


The advent of many-core processors with a greatly reduced amount of per-core memory has shifted the bottleneck in computing from FLOPs to memory. A new, complex memory/storage hierarchy is emerging, with persistent memories offering greatly expanded capacity, augmented by DRAM/SRAM caches and scratchpads to mitigate latency.



With active memory, computation typically handled by a CPU is performed within the memory system. Performance improves and energy use drops because processing occurs close to the data, without the overhead of moving the data across chip interconnects from memory to the processor. Emerging 3D memory packaging technology offers new opportunities for computing near memory: a separate logic layer added to the memory stack holds compute elements that operate directly on data in the memory package itself. The benefits of this approach are two-fold. First, the amount of data transferred from the memory chips to the CPU can be reduced because computation can occur in the logic layer. Second, computing near memory exploits bandwidth within the 3D package that is orders of magnitude greater than what is available off-chip.



Many data-centric applications need to perform operations such as search, filter, and data reorganization across a large cross section of data. Using traditional architectures, the data must be moved from storage to memory and then funneled through the CPU. Data-centric operations are therefore ideal candidates for off-load to a memory system with processing capability. These operations can be categorized into three types, which can be used independently or in conjunction with one another.



LLNL has an active research program in memory-centric architectures (see https://computation.llnl.gov/projects/memory-centric-architectures). LLNL's research program focuses on transforming the memory-storage interface with three complementary approaches:


--Active memory and storage in which processing is shared between CPU and in-memory/storage controllers,


--Efficient software cache and scratchpad management, enabling memory-mapped access to large, local persistent stores,


--Algorithms and applications that provide a latency-tolerant, throughput-driven, massively concurrent computation model.



As part of our ongoing work, LLNL researchers quantitatively evaluate the potential benefits of active memory made possible by 3D packaging of memory with logic. From this research program, LLNL has developed a new active memory data reorganization engine.

Description:


In the simplest case, data can be reorganized within the memory system to present a new view of the data. The new view may be a subset or a rearrangement of the original data. For example, an array of structures might be more efficiently accessed by a CPU as a structure of arrays. Active memory can assemble this alternative representation within the memory package so that the bytes sent to the CPU arrive in a cache-friendly layout.

Potential Applications:


Possible applications include in-memory graph traversal, efficient sparse-matrix access for computations on irregular meshes, and in-memory and streaming assembly of multiple-resolution image windows from high-resolution imagery.

Development Status:


LLNL has a published patent application, "Near Memory Data Reorganization Engine," for this invention and is seeking collaborators to further develop and commercialize the technology.

LLNL is seeking industry partners with a demonstrated ability to bring such inventions to the market. Moving critical technology beyond the Laboratory to the commercial world helps our licensees gain a competitive edge in the marketplace. All licensing activities are conducted under policies relating to the strict nondisclosure of company proprietary information.

Please visit the IPO website at https://ipo.llnl.gov/resources for more information on working with LLNL and the industrial partnering and technology transfer process.


Note: THIS IS NOT A PROCUREMENT. Companies interested in commercializing LLNL's Active Memory Data Reorganization Engine technology should provide a written statement of interest, which includes the following:


1. Company name and address.


2. The name, address, and telephone number of a point of contact.


3. A description of corporate expertise and facilities relevant to commercializing this technology.


Written responses should be directed to:


Lawrence Livermore National Laboratory


Industrial Partnerships Office


P.O. Box 808, L-795


Livermore, CA 94551-0808


Attention: FBO 311-16


Please provide your written statement within thirty (30) days from the date this announcement is published to ensure consideration of your interest in LLNL's Active Memory Data Reorganization Engine technology business opportunity.


Contracting Office Address :
7000 East Avenue
L-795
Livermore, California 94550
Primary Point of Contact:
Connie L Pitcock
Phone: 925-422-1072
Fax: 925-423-8988
