
Dromology in the Age of Synthetic Cognition

Paul Virilio was a French cultural theorist best known for his theory of dromology, which explores the logics and impacts of speed in the modern era. At its core, it theorizes how the velocity of action or decision-making enables actors to accrue wealth and power over others. Virilio often approached this concept through the lens of martial power, contemplating how new means of movement — the horse, the automobile, telemetric control — created new capacities to overcome the frictions of time and space, and to outpace adversaries through heightened sensing and accelerated decision-making.

We exist in an era of digital intensification. Cybernetic systems are now core to many people’s daily realities, including systems over which they have little meaningful influence or control.1 Earlier digital modernity was often described as an “attention economy.” Today, we may be entering what I’ll call a “velocity economy,” one that increasingly grapples with the implications of a faster-moving world.

Escape Velocities

Om Malik has written recently on velocity and how it may now precede attention as a structuring condition:

What matters now is how fast something moves through the network: how quickly it is clicked, shared, quoted, replied to, remixed, and replaced. In a system tuned for speed, authority is ornamental. The network rewards motion first and judgment later, if ever. Perhaps that’s why you feel you can’t discern between truths, half-truths, and lies.

Algorithms on YouTube, Facebook, TikTok, Instagram, and Twitter do not optimize for truth or depth. They optimize for motion. A piece that moves fast is considered “good.” A piece that hesitates disappears. There are almost no second chances online because the stream does not look back. People are not failing the platforms. People are behaving exactly as the platforms reward. We might think we are better, but we have the same rat-reward brain.

When velocity becomes the scarcest resource, everything orients around it. This is why it’s wrong to think of “the algorithm” as some quirky technical layer that can be toggled on and off or worked around. The algorithm is the culture. It decides what gets amplified, who gets to make a living, and what counts as “success.”

Once velocity is the prize, quality becomes risky. Thoughtfulness takes time. Reporting takes time. Living with a product or an idea takes time. Yet the window for relevance keeps shrinking, and the penalty for lateness is erasure. We get a culture optimized for first takes, not best takes. The network doesn’t ask if something is correct or durable, only if it moves. If it moves, the system will find a way to monetize it.2

The creation and publication of content — and the efforts to manipulate engagement metrics to juice algorithms — have long been partially automated. Bot farms and content farms are not new. What may be new is the scale and ease of synthesis. As the cost of producing text, images, summaries, and responses to each of them declines with the widespread adoption of LLMs and agentic systems, the volume of generated material increases dramatically.

That increase in volume does not just mean “more noise.” It alters competitive dynamics and means that velocity — which then accrues attention — becomes key in an algorithmically intermediated world. In this environment, what increasingly comes under pressure is decisional latency — the time between sensing, synthesizing, and acting. And humans are deciding what to focus on based on automations and algorithms designed to surface what they “should” be paying attention to.

Earlier digital acceleration primarily affected distribution: messages moved faster, and, for example, telemetrics enabled the expression of power at greater distances. Now we may be witnessing the acceleration of what looks like cognition. LLMs have no theory of mind insofar as they do not “understand” in any human sense. Yet they can synthesize, summarize, categorize, and prioritize at speeds that mimic cognitive activity. And when those synthesized outputs are connected to agentic systems capable of taking action — filing forms, executing transactions, triggering workflows — we move beyond accelerated messaging into accelerated execution. Decisional latencies can become compressed in order to produce outputs that move sufficiently fast, and with sufficient purchase, to be registered by algorithms as worthy of amplification and, ultimately, human attention.

Put differently: as velocity becomes a mode of capturing attention, there is pressure to move more quickly in the face of other, similarly fast-moving outputs, and in ways that potentially exploit or game algorithms in an effort to obtain human attention.
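To make the idea of decisional latency concrete, here is a minimal sketch of the sense-synthesize-act loop described above. The function names and the idea of timing the loop are mine, not Virilio’s or Malik’s; the point is simply that the measured interval is what a velocity economy pressures systems to shrink.

```python
import time
from typing import Any, Callable


def decisional_cycle(
    sense: Callable[[Any], Any],
    synthesize: Callable[[Any], Any],
    act: Callable[[Any], None],
    signal: Any,
) -> float:
    """Run one pass of the loop and return its decisional latency in seconds.

    All three callables are hypothetical stand-ins: `sense` might poll a feed,
    `synthesize` might call an LLM to summarize or classify, and `act` might
    publish, reply, or trigger a downstream workflow.
    """
    start = time.monotonic()
    observation = sense(signal)
    assessment = synthesize(observation)
    act(assessment)
    return time.monotonic() - start  # the interval a velocity economy pushes toward zero
```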

New Velocity, New Harms

For Virilio, every accelerant technology carried with it a corresponding accident. The invention of the ship implied the shipwreck. The car led to the car crash. Radio and telecommunications enabled new forms of propaganda and coordinated deception. And so on.

LLMs and agentic systems may carry their own accident structures. They enable automated persuasion at scale. A flaw in a widely deployed foundation model could result in class-breaking errors replicated across applications dependent on that model.3

Agentic systems introduce further risks: cascading autonomous mis-executions, rapid propagation of flawed decisions, and compounding feedback loops that create significant problems before humans detect them.4

AI accidents have the potential to be more distributed and more simultaneous than prior automation failures. While automated systems have long posed risks, the generalized, cross-sector nature of foundation models could expand the blast radius of automated harms. When many systems rely on shared models or shared training data, correlated failures become more plausible.

Velocity, in this sense, does not merely amplify error; it compresses the window in which errors can be identified and corrected. It risks creating brittle systems and generating what Charles Perrow has called “normal accidents.”

Velocity and Organizational Impacts

If decisional latency becomes the friction to be minimized in a velocity economy, organizations may feel pressure to shorten analytic cycles and accelerate workflow tempos. In domains where speed confers agenda-setting power, organizations may need to move faster or risk marginalization.

At the same time, we might see a divide emerge. Some institutions may further prioritize velocity and first-mover visibility as a way to shape agendas. Others may deliberately preserve slower processes to protect legitimacy and safety. Friction — often treated as inefficiency — may come to function as a source of institutional credibility.5 It may also be used by some organizations to justify resistance to innovation, with the effect of falling behind other actors.

As information volume expands, organizations and individuals may increasingly depend on third-party systems to track, assess, and prioritize what is “meaningful.” LLMs and agentic systems may be paired with other automated triage systems designed to impose order on informational abundance.

Yet such sense-making is inherently lossy. The world is dense with detail, contingency, contradiction, and edge cases. When LLMs normalize information statistically, much of that raw specificity can be abstracted away. The effect can be that important context is never surfaced for human review; reliance on abstracted assessment systems to navigate a digitally intermediated world may entail a further loss of representational fidelity.

This abstraction is not unprecedented — humans have always distilled complexity — but the scale and automation of the distillation may be new. And as (or if) human review recedes, the capacity to interrogate what has been smoothed over may diminish.
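As an illustration of how lossy this triage can be, here is a minimal sketch of an LLM-backed prioritization step. Everything here is hypothetical (the scoring, the threshold, the summarize() helper), but it shows the structural point: items that fall below a relevance cutoff, or outside an attention budget, are never surfaced for human review at all.

```python
from dataclasses import dataclass


@dataclass
class Item:
    source: str
    text: str
    relevance: float  # score from an upstream model; how it is produced is assumed, not specified


def summarize(text: str, max_words: int = 40) -> str:
    """Stand-in for a model-backed summary; here it simply truncates, which is itself lossy."""
    words = text.split()
    return " ".join(words[:max_words]) + ("…" if len(words) > max_words else "")


def triage(items: list[Item], threshold: float = 0.7, budget: int = 10) -> list[str]:
    """Rank incoming items and keep only a summarized slice for human attention.

    Anything below `threshold`, or beyond the attention `budget`, is silently dropped:
    the raw specificity of those items never reaches a person.
    """
    ranked = sorted(items, key=lambda i: i.relevance, reverse=True)
    kept = [i for i in ranked if i.relevance >= threshold][:budget]
    return [summarize(i.text) for i in kept]
```

Dropping rather than deferring below-threshold items is exactly the kind of design decision that tends to disappear from view once such a pipeline is running.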

Organizations must also determine when they will introduce human review and when they will deliberately refrain from doing so. Prioritizing human assessment of all outputs could introduce friction that other organizations or jurisdictions may not demand. A majority-human-review organization may operate outside the dominant tempo of a velocity economy, potentially gaining legitimacy and safety while sacrificing influence or timeliness.

Organizational Consequences of LLM and Agentic Velocity

If LLM- and agentic-enabled systems increase the rate at which information is generated and decisions are executed, several consequences may follow.

  1. The distribution of power may become linked to access to compute, to foundation models, to reliable data, and to the capacity to act digitally or physically. Countries that dominate the production — or regulation — of foundation models may accrue disproportionate influence. Where production and regulation of AI models or systems diverge between nation-states or geopolitical regions, conflicts over norms and authority may intensify.
  2. Organizations may need fast initial outputs to secure attention in a velocity-based information environment. However, rapid outputs need not be final outputs. Deeper analysis may continue in parallel, informing subsequent action and ensuring that longer-term activities based on such analysis remain well grounded in facts and aligned with strategic priorities. Organizations that excel at this two-track approach to knowledge production may gain strategic benefits in being able to set agendas as well as subsequently navigate them with complexity, depth, and institutional integrity.
  3. Where agentic systems are entrusted to make certain classes of judgments, institutions must determine under what conditions (and to what extent) they will add the friction of human oversight. The more friction introduced, the greater the potential divergence from competitors operating at full automation speed. At the same time, human-informed decision-making may confer benefits of perceived legitimacy and safety. (A minimal sketch of such an oversight gate follows this list.)
  4. Institutions must carefully consider how they can, and cannot, adopt LLMs and agentic systems so that they remain responsive to changes in the lived reality of the world while protecting the social trust they possess. There may be increased pressure on institutions to align their decisional horizons with machine-accelerated, innovation-driven time horizons, perhaps requiring a shift from decisions that are slow and fixed in time to ones that are faster moving and subject to routine revision. For bureaucratic organizations or institutions this could require major changes6 in decisional structures and processes.
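The sketch below illustrates the friction dial described in point 3. It is a hypothetical construction (the risk scores, the threshold, and the queue are all assumptions), but it shows how a single parameter can trade automation speed against human review.

```python
from dataclasses import dataclass, field
from queue import Queue


@dataclass
class ProposedAction:
    description: str
    risk: float       # 0.0 to 1.0, assumed to come from the agent or a separate classifier
    reversible: bool


def execute(action: ProposedAction) -> None:
    """Stand-in for whatever the agent actually does (filing a form, triggering a workflow)."""
    print(f"executing: {action.description}")


@dataclass
class OversightGate:
    """Route agent-proposed actions to immediate execution or to a human review queue.

    risk_threshold is the friction dial: lower it and more actions wait on people
    (slower, arguably more legitimate); raise it and the system runs at machine tempo.
    """
    risk_threshold: float = 0.3
    review_queue: Queue = field(default_factory=Queue)

    def route(self, action: ProposedAction) -> str:
        if action.reversible and action.risk < self.risk_threshold:
            execute(action)                # full automation speed
            return "executed"
        self.review_queue.put(action)      # human-in-the-loop friction
        return "queued_for_review"
```

Where that threshold sits, and who is permitted to move it, is the institutional question rather than a technical one.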

Future-Looking Velocity-Imposed Pressures

If we take Virilio’s insights seriously, along with Malik’s observations about changes in technological activity, then there are at least three tensions worth watching:

  1. Organizations with access to contemporary models may be able to move more quickly and accurately, reducing the time needed to summarize or produce information while compressing decisional cycles. The risk, however, is that this elides the specificity of the actual world and delegitimizes decisions made with minimal (or insufficient) human oversight or governance. To what extent might LLM- and agentic-forward organizations make bad decisions more quickly and undermine their legitimacy? How much will access to contemporary models differentiate organizations’ abilities to undertake rapid-pace sense-making and decision-making?
  2. Epistemic pressures may worsen as synthetic media is produced at scale and automated intermediaries filter what humans encounter. What happens when your digital assistant, or one your organization relies on, has been sorting content for months, only for you to discover it has been amplifying propaganda because of model poisoning or bias you did not anticipate? What do you do when the decisions you’ve been making have unknowingly been badly torqued to the advantage of other parties?
  3. Class-breaks that result in cascading failures become more plausible in monocultural model ecosystems. To what extent does widespread reliance on common foundation models create systemic points of failure that are difficult to detect, diagnose, or correct? Will this encourage the development of more ‘small models’ in an effort to stem or mitigate these kinds of security impacts?

Virilio suggested that speed restructures power. Malik suggests that velocity now structures visibility and attention. If LLMs and agentic systems not only compress communication but also enable synthetic cognition and decisional execution, then the next few years may test whether institutions can preserve legitimacy, trust, and factually driven actions and decisions in a world increasingly oriented around motion.

It will be interesting to assess whether friction comes to be seen increasingly as an obstacle to wealth or power, or whether organizations that maintain appropriate degrees of friction preserve (or expand) their legitimacy relative to those that move quickly and break things.


  1. Examples include automated bots interacting with global capital markets, and the automated balancing of critical infrastructure systems to enable seamless continued services. ↩︎
  2. Emphasis not in original. ↩︎
  3. In computer security, a “class-break” refers to a vulnerability in a widely used underlying technology such that an exploit affecting one instantiation effectively compromises the entire class of systems built upon it. For example, a flaw in a common cryptographic library can render all software relying on that library vulnerable simultaneously. ↩︎
  4. If humans ever do detect them at all… ↩︎
  5. While not taken up here, this divide between moving quickly versus slowly may have interesting implications for agenda-setting windows, and for the development and proposal of policy problems and solutions. ↩︎
  6. Perhaps even existential changes! ↩︎

The Failure to Frame Covid-19 Mobility Data

(Photo by Gabriel Meinert on Unsplash)

For the past year, the Toronto Star has repeatedly run articles that use mobility data from mobile device advertisers to assess the extent to which Torontonians are moving too much. The reporting has routinely shown how much more or less frequently people are moving, with articles often suggesting that people are moving too much when they’re supposed to be staying put.

The problem? The way in which ‘too much’ is assessed runs contrary to public health advice and lacks sufficient nuance to inform the public. In the most recent reporting, we find that:

Between Jan. 18 and Feb. 28, average mobility across Ontario increased from 58 per cent to 65 per cent, according to the marketing firm Environics Analytics. Environics defines mobility as a percentage of residents 15 or older who travelled 500 metres or more beyond their home postal code.

To be clear: in Ontario, provincial and local public health leaders have strongly stated that people should get outside and exercise. That can involve walking or other outdoor activities. Those activities are not supposed to be restricted to 500 metres from your home; that kind of restriction was advice largely issued during more severe lockdowns in European countries. And we know that mobility figures are often higher in areas with higher percentages of BIPOC residents because those residents tend to have lower-paying jobs and must travel further to reach their places of employment.

As has become the norm, the fact that people have moved around more frequently as (admittedly ineffective) restrictions have been lifted, and that people are ‘region hopping’ by going from more restricted zones to less restricted ones, is being tightly associated with personal or individual failures. From a quoted expert, we find that:

“It shows that once things start to open, people just seem to do whatever, and that’s a recipe for disaster.”

I would suggest that what we are seeing is a pent-up, pretty normal, human response: the provincial government has behaved erratically, and you have some people racing around to get stuff done before returning to another (ineffective) set of restrictions, and a related set of people who believe that if the government is letting them move around then things must be comparatively safer. To put it another way, in the former case you have people behaving rationally (if, in some eyes, selfishly), whereas in the latter you have a failure by government to solve a collective action problem by downloading responsibility to individuals. In both cases you are seeing an uptick in behaviour that suggests people believe it is safer to do things now than it was before, when the government assumed some responsibility and signalled that moving about was less safe, actively discouraging it by keeping businesses and other ‘fun’ things shut down.

Throughout the pandemic response in Ontario, what has been evident is that the provincial government simply cannot develop and implement effective policies to mitigate the spread of the virus. The result of muddling through has been that the public, and especially small business, has suffered extraordinarily whilst the gains have been meagre. The lack of paid sick leave, as an example, has seriously stymied the ability of lower-income workers to keep themselves apart from others while they wait for diagnoses and, if positive, recover from their infections.

To be fair, the Toronto Star and other outlets have covered paid sick leave issues, along with many other failures by the provincial government in its handling of the pandemic. And there is certainly some obligation on individuals to adhere to public health advice as best they can. But we’ve long known these are collective action problems: there is a need to move beyond downloading responsibility to individuals, and for governments to behave effectively, coherently, and accountably throughout major crises. The provincial government has failed, and continues to fail, on every one of these measures, with the effect that individuals are responding to the past, present, and expected future actions of the government: more unpredictability and more restrictions on their daily lives as a result of government ineptitude.

The journalists could have cast what Ontarians are doing as a semi-natural response to the aforementioned government failings; instead, those individuals are being castigated. We shouldn’t be blaming the victims of the pandemic, but I guess that’s what happens when mobility data is assessed without proper framing.