
Why Memory-Centric Architecture Is The Future Of In-Memory Computing


Programs have always loaded data into memory before processing it; that is simply how RAM is used. One lesson learned over the years is that keeping a database entirely in memory delivers the fastest performance possible. According to Aleahmad et al., based on their studies in 2006, in-memory databases tended to handle large data sets more efficiently than on-disk systems.

As the corporate world makes Big Data and IoT crucial parts of its IT strategy, the need for database systems that can efficiently handle vast amounts of streaming data is becoming critical. Many corporations have turned to in-memory computing (IMC) to meet their database processing needs.

The Development of In-Memory Computing

IMC was created in response to the need for up-to-date information from data sources to support corporate decision-making. In the earliest days of corporate database architecture, the standard setup paired an analytical database (OLAP) with a transactional database (OLTP): data was periodically extracted from the transactional system and run through ETL operations to make it palatable to the analytical one. IMC was designed to combine these two systems into a hybrid transactional/analytical processing (HTAP) system, allowing …
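To make the contrast concrete, here is a minimal, purely illustrative sketch (the class and method names are hypothetical, not any vendor's API) of the HTAP idea: a single in-memory store serving both transactional writes and analytical reads, with no ETL step moving data between separate OLTP and OLAP systems.

```python
from collections import defaultdict

class InMemoryHTAPStore:
    """Hypothetical sketch: one in-memory store for both workloads."""

    def __init__(self):
        self._rows = []  # transactional row store held entirely in RAM

    def insert_order(self, customer, amount):
        # OLTP-style operation: record a single transaction.
        self._rows.append({"customer": customer, "amount": amount})

    def revenue_by_customer(self):
        # OLAP-style operation: aggregate over all rows in place.
        # No ETL step -- analytics run on the same in-memory data
        # the transactions just wrote, so results are always current.
        totals = defaultdict(float)
        for row in self._rows:
            totals[row["customer"]] += row["amount"]
        return dict(totals)

store = InMemoryHTAPStore()
store.insert_order("acme", 120.0)
store.insert_order("acme", 80.0)
store.insert_order("globex", 50.0)
print(store.revenue_by_customer())  # {'acme': 200.0, 'globex': 50.0}
```

Real HTAP systems add persistence, concurrency control, and columnar layouts for the analytical side, but the essential point is the same: both workloads see one copy of the data in memory.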
