Last year, Japan’s NHK conducted live 8K broadcast tests at the Rio Olympics opening ceremony in preparation for its plan to broadcast the Tokyo 2020 Summer Olympics fully in the format. As an ultra-high-definition, data-hungry new format, 8K raises questions ranging from the affordability and availability of consumer devices to watch the content on, to whether broadcast infrastructure can handle the data rates involved, even with heavy compression. As a result, entire industries are playing ‘catch up’ as the world watches on.
One of the chief benefits of 8K is the incredible clarity of its images – something that has been welcomed with open arms by organisations seeking to accurately identify or protect individuals.
In the case of identification, body-worn cameras are becoming standard across UK police forces as they seek to secure faster, more accurate convictions. The Metropolitan Police alone is rolling out cameras to all 22,000 front-line officers by 2018. 8K becomes especially useful in situations involving large crowds, where the ability to zoom in on individuals without degrading image quality could make the difference between an accurate identification and none at all.
From a protection point of view, cameras are commonplace in most settings that serve vulnerable people – including care homes, prisons and hospitals. The UK Government is extending this to animals through its plans to mandate the use of CCTV in abattoirs across England as a means to ensure welfare guidelines are met.
The Challenges of Even Bigger Data
Against this background of on-screen entertainment potentially surpassing the need for VR headsets, and citizens being recorded in almost all public places (statistics vary, but the British Security Industry Association estimates between 4m and 5.9m CCTV cameras are in use in the UK), there is one common challenge: how to store all this data.
The conversation about storing and managing ever-increasing amounts of data has been running for a long time, but we now need to factor in that 8K footage is far denser than what came before – an 8K frame contains sixteen times the pixels of a Full HD frame – and is accelerating demand for storage space faster than originally anticipated. With budgets unlikely to grow at anything near the rate of data growth, organisations will become increasingly squeezed between the data they want to keep, the data legislation requires them to retain, and the storage capacity they have.
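To make the scale concrete, here is a back-of-the-envelope comparison of uncompressed 8K and Full HD video. The resolutions are standard; the 24-bit colour depth and 60fps frame rate are illustrative assumptions, not broadcast specifications:

```python
# Back-of-the-envelope comparison of raw (uncompressed) video bitrates.
# Resolutions are the standard 8K UHD and Full HD pixel counts; the
# colour depth and frame rate below are illustrative assumptions only.

def raw_bitrate_gbps(width, height, bits_per_pixel=24, fps=60):
    """Uncompressed video bitrate in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

rate_8k = raw_bitrate_gbps(7680, 4320)   # 8K UHD
rate_hd = raw_bitrate_gbps(1920, 1080)   # Full HD

print(f"8K:      {rate_8k:.1f} Gbps uncompressed")
print(f"Full HD: {rate_hd:.1f} Gbps uncompressed")
print(f"Ratio:   {rate_8k / rate_hd:.0f}x")  # 16x the pixels, 16x the data
```

Whatever compression is applied downstream, that 16x multiplier is carried all the way through capture, transmission and storage.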
Use it or Lose it?
In some cases, data will be erased after a set time (assuming no incident has been reported). The table below describes how Transport for London (TfL) handles data captured by its cameras:
What is interesting here is the reason behind each retention period – TfL retains images for only a very short time. Is that a matter of corporate policy or of available storage space? Is the decision to time-limit storage driven by privacy legislation, or by the cost of holding on to ever-growing volumes of data?
London Underground alone carries 1.37bn passengers annually, so when you consider the efforts of law enforcement agencies to track the movements of criminals and suspects, data erased at the end of TfL’s retention period could have been extremely valuable.
With machine learning making the analysis of vast data repositories (such as finding a face in hours of crowd footage) easier, filtering data becomes less of a challenge – what comes to the fore is the need to make storing massive data sets more economical.
This is one example of how lowering storage costs could help an organisation extend its capabilities. On the flip side, there are retention requirements that can stretch over years or even decades: evidence in criminal cases, compliance with legislation such as the GDPR, or company IP retained as a long-term corporate asset.
The story of storage has not followed the classic technology arc, in which newer technologies make legacy systems obsolete. Instead we’ve seen innovation combined with a renaissance, in which tape has proved itself (in partnership with flash) as a solution that overcomes these challenges more economically than other storage media. A brief breakdown of the benefits:
- Energy consumption up to 76 times lower than disk [1]
- Up to 90% TCO reduction with technology such as IBM Spectrum Archive [2]
- Up to 30PB of storage in 10 sq ft of floor space [3]
- Data portability for disaster recovery – tapes are portable and easily transferred
- Secure multi-tenancy – virtual or physical
- Network threats can be countered with offline storage – also ideal when data needs to be stored, but not retrieved
Tape has proven to offer by far the lowest TCO, as illustrated by this extract from the Wikibon Case Study tool. With flash + tape (‘Flape’) still offering a 53% TCO saving over disk, the combination is proving a robust solution that offers strong scalability and searchability, and a sound fit as new forms of data enter the market.
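The shape of such a TCO comparison can be sketched as follows. Every cost figure below is an illustrative placeholder chosen for the example, not a number from the Wikibon study or from IBM; the point is how tiering a small hot slice on flash and the cold bulk on tape changes the total:

```python
# Toy total-cost-of-ownership comparison: disk-only vs flash+tape.
# All cost figures are illustrative placeholders, NOT Wikibon's data.

def tco(capacity_pb, cost_per_pb_per_year, energy_per_pb_per_year, years=10):
    """Simple TCO model: media/maintenance plus energy, over a horizon."""
    return capacity_pb * years * (cost_per_pb_per_year + energy_per_pb_per_year)

capacity = 30  # PB to store

disk_only = tco(capacity, cost_per_pb_per_year=120_000,
                energy_per_pb_per_year=20_000)

# Flash holds a small "hot" slice for fast retrieval; tape holds the rest.
flape = (tco(capacity * 0.05, 400_000, 10_000)    # 5% hot data on flash
         + tco(capacity * 0.95, 30_000, 1_000))   # 95% cold data on tape

print(f"Disk-only TCO:  ${disk_only:,.0f}")
print(f"Flash+tape TCO: ${flape:,.0f}")
print(f"Saving:         {100 * (1 - flape / disk_only):.0f}%")
```

Even with placeholder figures, the structure shows why the saving grows as the cold fraction of the data grows – exactly the trend 8K-era archives will follow.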
Faster, Smarter, Future Facing
So how has tape moved on? For a start, it has become a lot faster – modern digital tape can now stream data at 500Mbps.
With flash + tape, organisations can make smarter decisions on data classification – deciding what needs to reside on flash for rapid retrieval and what can stay on tape for long-term storage. With solutions such as IBM Spectrum Scale, lifecycle management policies can be applied to automatically work out when to use flash and when to use tape.
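As a rough illustration of the kind of decision such a lifecycle policy automates – note that the thresholds and tier names below are hypothetical examples, not IBM Spectrum Scale policy syntax:

```python
# Minimal sketch of an automated flash/tape tiering decision, in the
# spirit of the lifecycle policies described above. Thresholds and tier
# names are hypothetical illustrations, not Spectrum Scale configuration.

from datetime import datetime, timedelta
from typing import Optional

def choose_tier(last_accessed: datetime, under_legal_hold: bool,
                now: Optional[datetime] = None) -> str:
    """Return the storage tier a piece of data should live on."""
    now = now or datetime.now()
    if under_legal_hold:
        return "tape"                      # must be kept, rarely retrieved
    if now - last_accessed < timedelta(days=30):
        return "flash"                     # hot data: rapid retrieval
    return "tape"                          # cold data: low-cost long-term

now = datetime(2018, 1, 1)
print(choose_tier(now - timedelta(days=3), False, now))    # flash
print(choose_tier(now - timedelta(days=400), False, now))  # tape
```

In a real deployment the policy engine applies rules like these across the whole file system and migrates data between tiers automatically, rather than per file in application code.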
As for future-facing, another advancement in the digital tape arena is thumbnail navigation (where a thumbnail might represent, for example, one minute of footage). We believe this will prove revolutionary for organisations seeking to manage and retrieve imagery, whether they want to entertain us or protect us.
Faster, smarter and future facing. Tape is the winner of the 8k storage race.
[1] Clipper Group, “Continuing the Search for the Right Mix of Long-Term Storage Infrastructure — A TCO Analysis of Disk and Tape Solutions”, July 2015
[2] IBM analysis
[3] IBM TS4500 tape library with 3:1 compression