Framework for Big Data applications on Server Schema
Big data needs to be considered in terms of how the data will be manipulated. The size of the data set will impact data capture, movement, storage, processing, presentation, analytics, reporting, and latency. Traditional tools can quickly become overwhelmed by the large volume of big data. Latency, the time it takes to access the data, is as important a consideration as volume. Suppose we run an ad hoc query or a predefined report against a large dataset: a large data storage system is not a data warehouse, however, and it may not respond to queries within a few seconds. It is, rather, the organization-wide repository that stores all of the organization's data, and it is the system that feeds the data warehouses used for management reporting. One solution to the problems presented by very large data sets might be to discard parts of the data so as to reduce data volume, but this is not always practical. Regulations might require that data be stored for a number of years, or competitive pressure could force an organization to save everything.

The characterization results vary with the choice of server type: whether a big- or little-core-based server is more energy-efficient is significantly influenced by the size of the data, the performance constraints, and the presence of an accelerator. Furthermore, microarchitecture-level analysis gives a clear picture of server behavior, an analysis that is much needed for these server architectures.
Keywords: Performance, Power, Characterization, Big Data, High-Performance server, Low-Power server, Accelerator
Citation: *, (2018), Framework for Big Data applications on Server Schema. Scientific Transactions in Environment and Technovation Journal (STET), 11(4): 188-195
Received: 06/27/2017; Accepted: 05/14/2018;