Yesterday, 12:51 PM
Under the smokescreen, this is about changing the internal business model.
For years, ISPs have provided connectivity through a network of contractual service-level agreements with infrastructure owners. Those agreements essentially say, "you will receive all data we send to your customers as downloads, for free, and we will take all data your customers send as uploads, at a metered cost to you, and get it where it's going". The ISP cost-balances that asymmetry when pricing the customer's data package. That's why so many connections have huge download speeds but limited upload capacity.
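To make that cost-balancing concrete, here's a minimal back-of-envelope sketch in Python. The per-GB rates are invented purely for illustration; real settlement terms are negotiated and confidential. It just shows why, under "downloads free, uploads metered" terms, capping upload keeps a subscriber cheap to carry:

```python
# Back-of-envelope sketch of the asymmetry described above.
# All rates and volumes are hypothetical, made up for illustration --
# real transit/peering prices are negotiated and confidential.

FREE_DOWNLOAD = 0.00   # $/GB the ISP pays to receive data bound for its customers
METERED_UPLOAD = 0.05  # $/GB the ISP pays to hand off customer uploads

def monthly_transit_cost(download_gb: float, upload_gb: float) -> float:
    """What the ISP pays upstream for one subscriber's monthly traffic mix."""
    return download_gb * FREE_DOWNLOAD + upload_gb * METERED_UPLOAD

# A download-heavy subscriber is nearly free to carry...
print(monthly_transit_cost(download_gb=500, upload_gb=10))   # 0.50
# ...while a symmetric one costs over 25x as much at the same total volume.
print(monthly_transit_cost(download_gb=255, upload_gb=255))  # 12.75
```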
This worked well for years, when most data collection and information-value creation was done by gathering server-side metrics of end-user activity, for market aggregation and data reselling. The server would log requests and aggregate them on the back end, using none of the customer's upload bandwidth beyond the requests themselves and tracking identifiers. The incentive to discourage customer uploads was aligned, because the back-end systems that generated metadata and folded it into a larger information corpus would have been overwhelmed by too much inbound data.
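A minimal sketch of that old model, with hypothetical field names, just to show where the work happened: the client's upstream cost is the request itself, and all the aggregation runs server-side.

```python
# Old model: the server logs each request and aggregates on the back end.
# The only upstream bytes the customer spends are the request plus a
# tracking identifier. Field names are hypothetical, for illustration.

from collections import Counter

request_log = []  # written server-side, at no cost to the client's upload

def handle_request(path: str, tracking_id: str) -> None:
    # The client uploaded only a few hundred bytes (URL, headers, cookie).
    request_log.append({"path": path, "tracking_id": tracking_id})

def aggregate() -> Counter:
    # Aggregation for market/reselling metrics runs entirely server-side,
    # consuming none of the customer's upload bandwidth.
    return Counter(entry["path"] for entry in request_log)

handle_request("/product/42", "user-abc")
handle_request("/product/42", "user-xyz")
print(aggregate())  # Counter({'/product/42': 2})
```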
That's changed, though, with the move to client-side rendering and improvements in endpoint telemetry collection, as well as things like face identification and voice-recognition-driven AI models. Those require much more upload bandwidth to get the collected data from the customer to the network, and that will be the case for at least the next 10-15 years, until endpoint capacity, storage, and delivery optimization balance things out.
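For a rough sense of scale, here's a sketch comparing upstream bytes per day under the two models. Every byte count is an invented order-of-magnitude guess, not a measurement:

```python
# Rough per-day upstream comparison of the two models.
# All byte counts are hypothetical order-of-magnitude guesses.

REQUEST_OVERHEAD = 400    # bytes: URL, headers, tracking cookie (old model)
TELEMETRY_EVENT  = 4_000  # bytes: client-side telemetry payload (JSON blob)
FACE_EMBEDDING   = 2_048  # bytes: e.g. a 512-float vector shipped upstream
VOICE_CLIP       = 160_000  # bytes: ~10 s of compressed audio

events_per_day = 1_000

old_model = events_per_day * REQUEST_OVERHEAD
new_model = events_per_day * (TELEMETRY_EVENT + FACE_EMBEDDING) + 20 * VOICE_CLIP

print(f"old model: {old_model / 1e6:.1f} MB/day upstream")  # 0.4 MB/day
print(f"new model: {new_model / 1e6:.1f} MB/day upstream")  # 9.2 MB/day
```

Even with these made-up numbers the direction is the point: collection has moved to the endpoint, so the value now rides on the upload path that the old contracts were built to discourage.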
So the incentives are misaligned at the infrastructure-delivery level. The contractual incentive to discourage upload needs to be inverted, within the structure of inter-ISP business relationships. This should all happen "behind the scenes" and be presented to the customer as "improving quality". Arguments about net neutrality need to be understood through this lens; otherwise they make no sense.