There followed a phase where I was looking at the CERN particle trajectory data David had given me from the Transpositions project, perhaps creating strange interpretations of these trajectories. In the end it was David who used this kind of data for his video works. I came to a point where I lost my understanding of why I was juggling with these files; I simply couldn't find an entry point that established an intrinsic connection, one that directly resonated with me.

{group: CERN}

#### Hough

Early on in the project we decided to create a spatial hull, the triangle mesh that Lisa designed, and use it as a skin for video projection. Interesting questions arose around how to achieve the projection onto the three-dimensional object. Should we use projection mapping, should we aim for an independent image, should we skew the image, should there be a beginning and an ending point, should there be interruptions by the columns of the space, etc.

It soon became clear that white or bright figures on a black background would work best to dissolve the inherent rectangularity of the technical image format. I was thinking of some of the studies I did for the "Miniaturen 15" project, in particular dynamically rearranging text and the processed scans of tree trunks.

{group: trunks}

// Try to align two sets of triangles.
var i = 0
while (i < numTriNext) {
  val triN = triNext(i)
  triN.coh = -1                 // mark as unmatched
  var j = 0
  while (j < numTriPrev) {
    val triP = triPrev(j)
    if (triP.isIncoherent) {
      val perm = permutations
      var p = 0
      while (p < 6) {           // try all six vertex permutations
        val pi  = perm(p)
        val dx1 = triN.x1 - triP.x(pi._1)
        if (dx1 >= -leftTol && dx1 <= rightTol) {
          val dx2 = triN.x2 - triP.x(pi._2)
          if (dx2 >= -leftTol && dx2 <= rightTol) {
            val dx3 = triN.x3 - triP.x(pi._3)
            if (dx3 >= -leftTol && dx3 <= rightTol) {
              val dy1 = triN.y1 - triP.y(pi._1)
              if (dy1 >= -topTol && dy1 <= bottomTol) {
                val dy2 = triN.y2 - triP.y(pi._2)
                if (dy2 >= -topTol && dy2 <= bottomTol) {
                  val dy3 = triN.y3 - triP.y(pi._3)
                  if (dy3 >= -topTol && dy3 <= bottomTol) {
                    triP.mkCoherent()
                    triN.coh = j | (p << 28)  // store match index and permutation
                    p = 6                     // abort inner loops
                    j = numTriPrev
                    numMatches += 1
                  }
                }
              }
            }
          }
        }
        p += 1
      }
    }
    j += 1
  }
  i += 1
}

{kind: code, group: triangles}

Software visual control interface

{group: software}

// findTriangles: connect the detected lines to form triangles
val minTriLenSq = minTriLen * minTriLen
var numTri = 0
var p = 0
while (p < numLines) {
  val count  = numIntersectionsS(p)
  val i      = count.index
  val numInt = count.count
  if (!triTaken(i) && numInt >= 2) {
    val intIdx0 = i * maxLines
    var intIdx1 = 0
    val numIntM = numInt - 1
    while (intIdx1 < numIntM) {
      val int1 = intersections(intIdx0 + intIdx1)
      assert(int1.isValid)
      val j = int1.targetIdx
      if (!triTaken(j)) {
        val a = lines(i)
        val b = lines(j)
        // squared distances of each line's end points to the intersection
        val ad1 = lineLenPtSq(a.x1, a.y1, int1.x, int1.y)
        val ad2 = lineLenPtSq(a.x2, a.y2, int1.x, int1.y)
        val bd1 = lineLenPtSq(b.x1, b.y1, int1.x, int1.y)
        val bd2 = lineLenPtSq(b.x2, b.y2, int1.x, int1.y)
        resizeLine(in = a, out = findTriAuxLine, width = width, height = height, factor = shrink)
        val x2 = if (ad1 < ad2) findTriAuxLine.x2 else findTriAuxLine.x1
        val y2 = if (ad1 < ad2) findTriAuxLine.y2 else findTriAuxLine.y1
        resizeLine(in = b, out = findTriAuxLine, width = width, height = height, factor = shrink)
        val x3 = if (bd1 < bd2) findTriAuxLine.x2 else findTriAuxLine.x1
        val y3 = if (bd1 < bd2) findTriAuxLine.y2 else findTriAuxLine.y1
        if (minTriLen == 0 || (
            lineLenPtSq(int1.x, int1.y, x2, y2) >= minTriLenSq &&
            lineLenPtSq(x2, y2, x3, y3)         >= minTriLenSq &&
            lineLenPtSq(x3, y3, int1.x, int1.y) >= minTriLenSq
          )) {
          val tri = triTemp(numTri)
          numTri += 1
          tri.x1 = int1.x
          tri.y1 = int1.y
          tri.x2 = x2
          tri.y2 = y2
          tri.x3 = x3
          tri.y3 = y3
          triTaken(i) = true
          triTaken(j) = true
          intIdx1 = numIntM   // abort loops
        }
      }
      intIdx1 += 1
    }
  }
  p += 1
}

{kind: code, group: triangles}

[hh 05/02/17]

I don't recall exactly how the eventual piece emerged, but at some point I started experimenting with digital image processes that decomposed photos of the esc space into lines and triangles. An interesting algorithm in computer vision is the Hough transform, a technique "to find imperfect instances of objects within a certain class of shapes" (Wikipedia). It seemed perfectly suited to the topic of the exhibition, and it allowed me to pick up the motif of the triangle mesh deconstruction.

The piece takes real-time snapshots from a surveillance camera mounted in the gallery, sweeping almost 360 degrees to analyse the space. The algorithm then uses a variant of the Hough transform to first decompose the image into lines, followed by a stage that tries to connect these lines to form triangles. A third stage addresses the question of the constancy of objects—it makes hypotheses about whether a triangle in a successive frame is still "a same" triangle as one in the previous frame.
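The first of these stages can be pictured with a minimal sketch of the classical Hough transform for lines, voting in a (theta, rho) accumulator; all names here (`edges`, `houghLines`) are illustrative assumptions, not the code of the actual piece.

```scala
object HoughSketch {
  // Accumulate votes in (theta, rho) space for each edge pixel of a
  // binary edge image given in row-major order. Peaks in the returned
  // accumulator correspond to candidate lines.
  def houghLines(edges: Array[Boolean], width: Int, height: Int,
                 numTheta: Int = 180): Array[Array[Int]] = {
    val maxRho = math.ceil(math.hypot(width, height)).toInt
    val acc    = Array.ofDim[Int](numTheta, 2 * maxRho + 1) // rho index shifted by +maxRho
    var y = 0
    while (y < height) {
      var x = 0
      while (x < width) {
        if (edges(y * width + x)) {
          var t = 0
          while (t < numTheta) {
            val theta = t * math.Pi / numTheta
            val rho   = (x * math.cos(theta) + y * math.sin(theta)).round.toInt
            acc(t)(rho + maxRho) += 1
            t += 1
          }
        }
        x += 1
      }
      y += 1
    }
    acc
  }
}
```

A horizontal row of edge pixels at y = 5, for instance, produces a single sharp peak at theta = 90 degrees and rho = 5.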

How these procedures translate into actual aesthetic objects was also subject to various changes over time. I began experimenting with high-resolution daylight photos of the esc space and a first version of grouping the lines into triangles. The results are images clearly showing the alignments of the space and yielding an architectural quality. The pre-processing, that is, the way the input image was filtered, adjusted in contrast, and fed into an edge detection algorithm, had a strong impact on the way the triangles would eventually be formed.

{group: triangles, keywords: [surveillance, Hough algorithm, lines, triangles, space, architecture, edgedetection]}
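As an illustration of such a pre-processing chain, here is a minimal edge detection step using the Sobel operator, which feeds a binary edge map into a subsequent Hough stage; whether the piece actually used Sobel is an assumption, and the names (`sobelEdges`, `thresh`) are invented for this sketch.

```scala
object SobelSketch {
  // Gradient magnitude via the Sobel operator on a grey-scale image in
  // row-major order; pixels whose magnitude exceeds `thresh` become edge
  // candidates. The one-pixel border is left unmarked.
  def sobelEdges(img: Array[Double], width: Int, height: Int,
                 thresh: Double): Array[Boolean] = {
    val out = Array.fill(width * height)(false)
    var y = 1
    while (y < height - 1) {
      var x = 1
      while (x < width - 1) {
        def p(dx: Int, dy: Int) = img((y + dy) * width + (x + dx))
        val gx = -p(-1,-1) - 2*p(-1,0) - p(-1,1) + p(1,-1) + 2*p(1,0) + p(1,1)
        val gy = -p(-1,-1) - 2*p(0,-1) - p(1,-1) + p(-1,1) + 2*p(0,1) + p(1,1)
        out(y * width + x) = math.hypot(gx, gy) >= thresh
        x += 1
      }
      y += 1
    }
    out
  }
}
```

Raising `thresh` or changing the contrast of the input shifts which edges survive, and with them the triangles formed downstream.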

First variant of triangle analysis based on photographs of the empty space

Sketches with the CERN particle collision data {group: CERN}

Chess board photo for calibrating the camera lens distortion

{group: chess, keywords: [_, camera]}

Camera lens distortion calibration coefficients

{group: distorsion}

After changing to the surveillance camera and understanding how to correct its lens distortion—crucial in order to find straight lines—the whole question of movement in time arose. We decided on an ultra-wide format (3840 x 540 pixels, covering the entire oval surface of the mesh), and I introduced a continuous shift in the synthetic image that roughly corresponds to the camera movement, even though the camera changes rotation direction, sometimes performing the opposite motion of the screen image.
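A single-coefficient radial model gives the flavour of what such a lens calibration computes; the actual calibration used richer coefficient sets, and all names here are assumptions made for this sketch.

```scala
object UndistortSketch {
  // One-coefficient radial model: a point at distance r from the optical
  // centre maps to r * (1 + k1 * r * r) in the distorted image. To build
  // an undistorted image, one computes for each output pixel where to
  // sample the distorted source.
  def distortedCoord(xu: Double, yu: Double, k1: Double,
                     cx: Double, cy: Double): (Double, Double) = {
    val dx = xu - cx
    val dy = yu - cy
    val r2 = dx * dx + dy * dy
    val f  = 1.0 + k1 * r2
    (cx + dx * f, cy + dy * f)
  }
}
```

With `k1 = 0` the mapping is the identity; a non-zero coefficient bends straight lines into curves, which is exactly what would defeat the Hough line detection if left uncorrected.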

What happened in the space was very interesting. The mesh surface, predictably, transformed the quality of the projected image immensely. After some testing, we settled on a rather simple projection mapping whereby a set of virtual triangles is mapped to the triangles of the mesh, preserving straight line segments on each triangle. The image obtains a new depth, as pixel sizes begin to vary and brightness changes according to the relative angle of each triangle towards the projector. The clean alignment structure seen with the photographs on a regular screen gives way to a more complex structure, now crawling across the mesh as an entirely new topology of the imperfectly reproduced space. The gallery space is not empty any longer; there is a huge structure in its middle, and so at times the camera is inevitably engaged in a feedback situation, at other times it captures an empty part of the room, and at yet other times it looks out of the gallery windows. This results in a rich variation in the density of the projected image, from very sparse constellations to very dense flocks.

{group: camera, keywords: [surveillance, Hough algorithm, lines, triangles, space, architecture, mesh, projection]}
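Mapping a virtual triangle onto a mesh triangle while keeping line segments straight within each triangle amounts to an affine map fixed by three point correspondences. A minimal sketch via barycentric coordinates (all names are assumptions, not the code of the piece):

```scala
object TriMapSketch {
  type Pt = (Double, Double)

  // Map a point p, expressed relative to source triangle (a1, a2, a3),
  // to the corresponding point in target triangle (b1, b2, b3). The
  // barycentric weights (w1, w2, w3) are affine-invariant, so straight
  // segments stay straight within each triangle.
  def mapPoint(p: Pt, a1: Pt, a2: Pt, a3: Pt,
               b1: Pt, b2: Pt, b3: Pt): Pt = {
    val det = (a2._1 - a1._1) * (a3._2 - a1._2) - (a3._1 - a1._1) * (a2._2 - a1._2)
    val w2  = ((p._1 - a1._1) * (a3._2 - a1._2) - (a3._1 - a1._1) * (p._2 - a1._2)) / det
    val w3  = ((a2._1 - a1._1) * (p._2 - a1._2) - (p._1 - a1._1) * (a2._2 - a1._2)) / det
    val w1  = 1.0 - w2 - w3
    (w1 * b1._1 + w2 * b2._1 + w3 * b3._1,
     w1 * b1._2 + w2 * b2._2 + w3 * b3._2)
  }
}
```

Across triangle borders the map is only piecewise affine, which is precisely why the image "breaks" along the mesh edges.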

At one point, there was a small glitch in the code that resulted in incomplete erasure of the image background, with triangles extending to the very boundary of the screen leaving behind traces. The trace structure depended highly on the parametrisation of the force-directed animation of the lines. Lines would either appear and disappear, if no preceding or successive coherent triangle was found, or they would be accelerated by this force vector to slide from frame to frame, in this case leaving characteristic curved blotches. The fact that these traces remained still was very interesting to me, giving a counter-point to the expected animation of the graphics. Each iteration, lasting a few minutes, would give rise to an entirely new pattern of traces, and even after watching the process for a very long time, there was always an unforeseen way in which the interaction between the components of the algorithm would play out. One day, the camera communication had crashed, but the other part of the system was still working, sticking faithfully to the last analysis carried out and producing strict horizontal lines as the trace pattern, something I had never seen before.
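The sliding behaviour can be pictured as a damped spring pulling a line endpoint towards its matched position in the next analysis frame; this is only a guessed model of the force-directed animation, with invented names and parameters, not the actual code.

```scala
object LineDriftSketch {
  // One coordinate of a line endpoint with its velocity.
  final case class State(x: Double, vx: Double)

  // Spring-like force towards the target position, velocity damped by
  // `damp` < 1, so the endpoint slides over several frames; with the
  // background not fully erased, that slide would leave a curved trace.
  def step(s: State, targetX: Double, k: Double, damp: Double): State = {
    val ax = (targetX - s.x) * k
    val vx = (s.vx + ax) * damp
    State(s.x + vx, vx)
  }
}
```

Depending on `k` and `damp`, the endpoint either snaps quickly (short traces) or overshoots and oscillates (long, curved blotches), which would match the observed sensitivity to the parametrisation.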

The life span of the triangles was another interesting aspect. Whenever I saw that a triangle persisted for a few analysis frames, I caught myself secretly wishing for it to survive as long as possible, almost like one wishes for soap bubbles to persist. I found that other visitors had the same reaction.

{keywords: [_, glitch, iteration, camera, triangles]}

---

meta: true

artwork: Hough

author: HHR

kind: diary

project: ImperfectReconstruction

place: ESC

keywords: [video installation, projection]

---