IBM has strengthened its experimental General Parallel File System (GPFS), which can now scan 10 billion files in 43 minutes, a new record for the company's research file system.
Faster and more modular, the architecture built on this parallel-transmission protocol optimizes both storage and data analysis. In development since 1998, GPFS is designed to meet the needs "of banking systems, financial analysis and science," according to Doug Balog, vice president of storage at IBM.
It can now scan 10 billion files in 43 minutes, thanks to a cluster of ten eight-core systems coupled with SSD (solid-state drive) technology. In 2007, the fastest storage architectures needed about three hours to scan one billion files.
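To put the two records side by side, here is a quick back-of-the-envelope throughput comparison. The 2011 figures come from the article; the 2007 baseline of one billion files in roughly three hours is an assumption based on the earlier record the article alludes to, and the per-second rates are derived, not quoted.

```python
# Derived scan-rate comparison for the GPFS demonstrations.
FILES_2011 = 10_000_000_000      # files scanned in the 2011 GPFS run (from the article)
SECONDS_2011 = 43 * 60           # 43 minutes

FILES_2007 = 1_000_000_000       # assumed size of the 2007 record task
SECONDS_2007 = 3 * 60 * 60       # roughly three hours

rate_2011 = FILES_2011 / SECONDS_2011   # files scanned per second in 2011
rate_2007 = FILES_2007 / SECONDS_2007   # files scanned per second in 2007

print(f"2011: {rate_2011:,.0f} files/s")
print(f"2007: {rate_2007:,.0f} files/s")
print(f"speedup: {rate_2011 / rate_2007:.0f}x")
```

Under these assumptions the newer system sustains close to four million file scans per second, about a forty-fold improvement over the 2007 baseline.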
“This breakthrough takes the form of unifying working environments on a single platform, rather than having to operate multiple, separately managed systems,” said Bruce Hillsberg, director of storage systems at IBM Research, in a statement. For businesses, the General Parallel File System (GPFS) promises to significantly accelerate data transfer and data management.
On premises as in the cloud, large corporations generate an ever-growing volume of data that they must analyze quickly and efficiently. In terms of Internet traffic, the era of the zettabyte (2^70 bytes) is fast approaching: Cisco already talks about exabytes (2^60 bytes) from 2012.
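As a quick check of the binary units mentioned above, a short arithmetic sketch (using the power-of-two definitions the article gives, which strictly correspond to the IEC exbibyte and zebibyte):

```python
# Binary byte-unit arithmetic behind the zettabyte/exabyte figures.
EXABYTE = 2 ** 60    # 1,152,921,504,606,846,976 bytes
ZETTABYTE = 2 ** 70  # 1,024 times larger

print(f"exabyte:   {EXABYTE:,} bytes")
print(f"zettabyte: {ZETTABYTE:,} bytes")
print(f"ratio: {ZETTABYTE // EXABYTE}")  # 1024
```

One zettabyte is a thousand-fold (1,024 in binary terms) jump over the exabyte scale Cisco cites for 2012.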