Tag cspaper

3 bookmarks have this tag.

2024-10-04

34.

Resilient Microservice Applications, by Design, and without the Chaos

christophermeiklejohn.com/publications/cmeiklej_phd_s3d_2024.pdf

Christopher S. Meiklejohn
CMU-S3D-24-104
May 2024
Software and Societal Systems
School of Computer Science

2024-07-15

20.

A History of Clojure

dl.acm.org/doi/pdf/10.1145/3386321

Clojure was designed to be a general-purpose, practical functional language, suitable for use by professionals
wherever its host language, e.g., Java, would be. Initially designed in 2005 and released in 2007, Clojure is
a dialect of Lisp, but is not a direct descendant of any prior Lisp. It complements programming with pure
functions of immutable data with concurrency-safe state management constructs that support writing correct
multithreaded programs without the complexity of mutex locks.
Clojure is intentionally hosted, in that it compiles to and runs on the runtime of another language, such as
the JVM. This is more than an implementation strategy; numerous features ensure that programs written in
Clojure can leverage and interoperate with the libraries of the host language directly and efficiently.
In spite of combining two (at the time) rather unpopular ideas, functional programming and Lisp, Clojure has
since seen adoption in industries as diverse as finance, climate science, retail, databases, analytics, publishing,
healthcare, advertising and genomics, and by consultancies and startups worldwide, much to the career-altering
surprise of its author.
Most of the ideas in Clojure were not novel, but their combination puts Clojure in a unique spot in language
design (functional, hosted, Lisp). This paper recounts the motivation behind the initial development of Clojure
and the rationale for various design decisions and language constructs. It then covers its evolution subsequent
to release and adoption.

2024-06-30

10.

ChatGPT is bullshit - Ethics and Information Technology

link.springer.com/article/10.1007/s10676-024-09775-5

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.