Soon, internet without servers
Scientists have designed a revolutionary architecture that aims to make the internet more "social" by eliminating the need to connect to servers and enabling all content to be accessed on a peer-to-peer basis.
The prototype, which has been developed as part of an EU-funded project called "Pursuit," is being put forward as a proof-of-concept model for overhauling the existing structure of the internet's IP layer, through which isolated networks are connected, or "internetworked."
The Pursuit Internet would enable a more socially-minded and intelligent system, in which users would be able to obtain information without needing direct access to the servers where content is initially stored.
Instead, individual computers would be able to copy and republish content on receipt, providing other users with the option to access data, or fragments of data, from a wide range of locations rather than the source itself.
Essentially, the model would enable all online content to be shared in a manner emulating the "peer-to-peer" approach taken by some file-sharing sites, but on an unprecedented, internet-wide scale.
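To make the idea concrete, here is a minimal, purely illustrative Python sketch of retrieving a piece of content as fragments held by different peers rather than from a single origin server. It is a toy, not Pursuit's actual protocol: the peers, the content identifier "doc1," and the fetch function are all hypothetical, and a real system would route requests by identifier instead of scanning every peer.

```python
# Toy illustration (not Pursuit's protocol): a document is reassembled
# from fragments that different peers happen to hold, so no single
# origin server is needed.

# Each peer holds only some fragments, keyed by (content_id, index).
# These peers and the "doc1" identifier are hypothetical examples.
peers = [
    {("doc1", 0): b"The ", ("doc1", 2): b"inte"},
    {("doc1", 1): b"new ", ("doc1", 3): b"rnet"},
]

def fetch(content_id: str, n_chunks: int) -> bytes:
    """Collect every fragment of the content from whichever peer has it."""
    parts = []
    for i in range(n_chunks):
        # Ask peers in turn until one has the fragment; a real system
        # would route by identifier rather than scanning all peers.
        for peer in peers:
            if (content_id, i) in peer:
                parts.append(peer[(content_id, i)])
                break
        else:
            raise KeyError(f"fragment {i} of {content_id} unavailable")
    return b"".join(parts)

print(fetch("doc1", 4))  # b'The new internet'
```

Any node that receives the fragments could then republish them, adding itself as one more source for later requests.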
Such a model would potentially make the internet faster, more efficient, and more capable of withstanding rapidly escalating levels of global user demand.
It would also make information delivery almost immune to server crashes, and significantly enhance the ability of users to control access to their private information online.
While this would lead to an even wider dispersal of online materials than we experience now, the researchers behind the project argue that by focusing on the information itself rather than the web addresses (URLs) where it is stored, digital content would become more secure.
They envisage that individual pieces of data could be made recognisable and "fingerprinted" to show that they come from an authorised source.
Technically, online searches would stop looking for URLs (Uniform Resource Locators) and start looking for URIs (Uniform Resource Identifiers), explained Dr Dirk Trossen, a senior researcher at the University of Cambridge Computer Lab, and the technical manager for Pursuit.
In simple terms, these would be highly specific identifiers which enable the system to work out what the information or content is.
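One common way to build such an identifier, shown in the hedged sketch below, is to derive it from a cryptographic hash of the content itself, so any copy can be verified no matter which machine served it. The "uri:sha256:" scheme here is a made-up example for illustration, not Pursuit's actual identifier format.

```python
# Minimal sketch of information-centric lookup: content is named by a
# fingerprint of its bytes (the URI scheme below is hypothetical), so a
# copy retrieved from any peer can be checked against its own name.
import hashlib

def fingerprint(data: bytes) -> str:
    """Derive a location-independent identifier from the content itself."""
    return "uri:sha256:" + hashlib.sha256(data).hexdigest()

store = {}  # stands in for copies scattered across many machines

def publish(data: bytes) -> str:
    uri = fingerprint(data)
    store[uri] = data  # any node receiving the data could republish it
    return uri

def retrieve(uri: str) -> bytes:
    data = store[uri]
    # The identifier itself proves the copy is authentic,
    # regardless of which peer it came from.
    assert fingerprint(data) == uri, "corrupted or forged copy"
    return data

uri = publish(b"some document")
print(uri)             # uri:sha256: followed by 64 hex digits
print(retrieve(uri))   # b'some document'
```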
This has the potential to revolutionise the way in which information is routed and forwarded online.
Thanks to: Hindustan Times