Richard Clegg Explaining What's Wrong With Computer Science
Here's part I:
See https://www.iso.org/about:
ISO, the International Organization for Standardization, brings global experts together to agree on the best way of doing things ... ISO is the short name for the International Organization for Standardization. It’s not an acronym, but a name inspired by the Greek word isos, meaning “equal” – reflecting our mission to create standards that ensure consistency and equality worldwide. Because the organization’s full name – and its initials – would vary across languages (for example Organisation internationale de normalisation in French), our founders chose “ISO” as a universal short form that could be recognized globally, regardless of language.
3:34 There was more to it than this, though. The ITU (then the CCITT) defined X.25 in the mid-seventies. The problem was that it was too much to implement, because it did all sorts of stuff that only a national telecoms provider would need.
It's not computer science, it's the history of stupid!
2:26 Google don't need to drive around in their Street View cars to do this, because they have billions of Android phones monitoring WiFi networks as they are carried around by users whose GPS locations go to Google Maps.
This model is actually abstract, because it does not need to be realized over physical networks at all. See TUN/TAP Virtual Networks.
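To make that concrete, here is a minimal sketch of the Linux TUN/TAP API (mine, not from the video): it creates a virtual network interface that exists only in software, so everything above it can run without any physical network at all. It assumes Linux, the CAP_NET_ADMIN privilege, and an arbitrary interface name of "tun0".

/* Sketch: create a TUN device, i.e. a virtual network interface realised
   entirely in software (Linux-specific; needs CAP_NET_ADMIN; the name
   "tun0" is just an illustrative choice). */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int main(void) {
    struct ifreq ifr;
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) { perror("open /dev/net/tun"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TUN | IFF_NO_PI;        /* deliver raw IP packets, no extra header */
    strncpy(ifr.ifr_name, "tun0", IFNAMSIZ - 1);

    if (ioctl(fd, TUNSETIFF, &ifr) < 0) { perror("TUNSETIFF"); close(fd); return 1; }

    printf("Created %s; read() and write() on its file descriptor now carry IP packets.\n",
           ifr.ifr_name);
    close(fd);                                  /* the interface disappears with the fd */
    return 0;
}

Everything from IP upwards then runs over a 'wire' that is nothing but a file descriptor, which is the sense in which the model is abstract.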
Subscribe to Computerphile.
The difference between concretion and abstraction is illustrated by this C program which in some sense describes itself:
main() { char *s="main() { char *s=%c%s%c; printf(s,34,s,34,10); }%c"; printf(s,34,s,34,10); }
But to know that this is the case you need to know a lot about the C programming language, and the behaviour of the printf function. Contrast that with this grammar, written by Tom Ridge, which also describes itself:
The difference is that this grammar could be translated into Chinese, by someone who understands Chinese, and it would still be recognisable to someone who doesn't know Chinese, but knows that it is supposed to be the same grammar. Indeed, knowing that would allow such a person to deduce something about the meaning of Chinese symbols.
Such a grammar need not only describe itself: it can also describe another class of grammar, for example a context-dependent one such as that of the C language. So one could formally define a C compiler starting from this grammar, and using that definition one could automatically generate a C compiler for any CPU, whether physical or virtual. One could therefore define the entire protocol stack of a network formally, and that would be a 'top-down' description of networking: one which assumes only the most basic fact, the idea of the representation of syntactic forms. "I've thought about this for years and years and years and I don't know why there should be an invisible syntactical intelligence giving language lessons in hyperspace.":
What is not clear from this is that the realization of this language is sensory (audio/visual):
See Switch Angel - Vibe Coding Without AI.
Subscribe to We Plants Are Happy Plants.
See also Emily Riehl and Terence Tao on The Future of Mathematics and About Logic Interview with Graham Priest. In particular, see On the Origins of Bisimulation and Coinduction by Davide Sangiorgi:
In Computer Science, and in Philosophical Logic, and in Set Theory, bisimulation has been derived through refinements of notions of morphism between algebraic structures. Roughly, morphisms are maps that are “structure-preserving”. The notion is therefore fundamental in all mathematical theories in which the objects of study have some kind of structure, or algebra. The most basic forms of morphism are the homomorphisms. These essentially give us a way of embedding a structure (the source) into another one (the target), so that all the relations in the source are present in the target. The converse however, need not be true; for this, stronger notions of morphism are needed. One such notion is isomorphism, which is however extremely strong—isomorphic structures must be essentially the same, i.e., “algebraically identical”. It is a quest for notions in between homomorphism and isomorphism that led to the discovery of bisimulation.
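A small example of the "in between" that the quote is getting at (my sketch, not Sangiorgi's): two labelled transition systems over a single action a, one with one state and one with two, which are bisimilar but obviously not isomorphic. The program just checks that a candidate relation satisfies the bisimulation transfer condition.

/* Sketch: P has one state p0 with p0 -a-> p0; Q has two states with
   q0 -a-> q1 and q1 -a-> q0.  R = {(p0,q0),(p0,q1)} is a bisimulation,
   but P and Q cannot be isomorphic (different numbers of states). */
#include <stdio.h>

static int succP(int p) { return 0; }       /* the only a-transition of P */
static int succQ(int q) { return 1 - q; }   /* Q alternates between its two states */

static int R[1][2] = { {1, 1} };            /* candidate relation: p0 R q0, p0 R q1 */

int main(void) {
    int ok = 1;
    for (int p = 0; p < 1; p++)
        for (int q = 0; q < 2; q++)
            if (R[p][q] && !R[succP(p)][succQ(q)])
                ok = 0;   /* both systems are deterministic, so this single check
                             is the transfer condition in both directions */
    printf("R %s a bisimulation, yet P and Q are not isomorphic.\n",
           ok ? "is" : "is not");
    return 0;
}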
The Lean theorem prover is written in the Lean language, which translates Lean code into C. So the semantics of Lean proofs are ultimately given by the CPU specification and its interpretation by the writers of the C compilers. This can be quite a complex process: 28:52 see the interview with Kevin Buzzard in About Logic - Theorem Proving, Constructive Mathematics and Type Theory.
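For what sits at the top of that chain, here is a trivial Lean 4 example (mine, purely for illustration): checking it means running the Lean elaborator, which is itself Lean code that was emitted as C and compiled for some CPU.

-- A trivial Lean 4 theorem: `rfl` succeeds because both sides reduce to the
-- same numeral.  The checker performing this reduction is itself a program
-- whose meaning bottoms out in a C compiler and a CPU specification.
theorem two_plus_two : 2 + 2 = 4 := rfl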
Subscribe to CppCon.