
Unearthing the past: making PCBs for the OTD TKL keyboards

"Stay awhile and listen!"

By Gondolindrim

First version published on March 17, 2023

Last revised on March 17, 2023


1 Introduction

You know kids, uncle Gondo has some experience under his belt. I have been into this keyboard thing for ten years now — yes, since 2013.

The story is somewhat contrived, but I first got into keyboards by force of an MMORPG called MU Online, which is going strong to this day. The gist of the story is that when we got into a clan, we started hearing our clan peers talk about customizing their mechanical keyboards; somehow, there were ways to make your in-game performance better by changing the switches to more responsive ones. As it just happened, there was this little forum called On The Desk, comprised of neckbeard keyboard enthusiasts who spent hours a day researching and building the sweet art of customized mechanical keyboards.

It was then and there that I discovered truly customized mechanical keyboards. Keyboards designed by OTD members; this was unthinkable! A hobbyist keyboard made by hobbyists, for hobbyists? And made entirely of metal? It just happened that I was a broke undergrad student counting coins to make ends meet, and importing keyboards from Korea was simply unfeasible, so I had to watch the whole thing unfold from the sidelines. The OTD line of keyboards was not as streamlined as people think it to be; some of them are called "OTDs" because they were made in the forum for the forum peers, but in fact the keyboards were seldom made by the same team. The most famous line of the OTD keyboards, however, the 456GT, 356, 356 Corsa, 356CL and 356.2, were designed by the same group; they even used the same PCB, made by a nowadays-legendary designer called gon. When people mention "OTD keyboards" they are generally referring to these specific ones; thenceforth, in this article, so shall I.

The OTD TKLs would go on to shape what the custom mechanical keyboard hobby is; in fact, some of the strategies, looks and techniques used back then are still in use. The (arguably) most defining keyboard to ever exist, the TGR Jane, was in fact based on the TKL line of OTDs. Being such era-defining pieces of hardware and design, the OTD TKLs have nowadays become true relics of the hobby; some of them reach into the thousands of dollars in aftermarket value, and a few are worth tens of thousands of dollars and are not even sold publicly, only among refined collectors. I am of course talking about the 356.2, a revisited version of the 356 sold only to the OTD moderation team and a few VIPs. gon himself had the habit of signing some of the PCBs he built for friends, and there are reports of PCBs with his signature being revered as pieces of mechanical keyboard hobby history.

As for myself, owning an OTD keyboard became a dream, and gon became somewhat of an idol. I mean it: just like any good idol, I have never met the person and never talked to them.


2 gon's last laugh

As it turns out, the Korean community would shape what became of the hobby, but they stayed somewhat enclosed in their own shenanigans and, as the hobby grew, many Korean makers emerged making keyboards that were sold only in Korea. Even after a significant worldwide audience had formed, the Korean guys kept somewhat aloof from the outside community; owning a Korean custom mechanical keyboard (a "kustom" in the community's jargon) in the west was a feat reserved only for hardcore enthusiasts. I honestly doubt that the original designers and members of OTD imagined the behemoth that the business of custom mechanical keyboards would become; most of them, including prominent members, are not active anymore and most, in true Korean fashion, simply vanished in a very elegant Asian equivalent of the Irish exit.

There is just one hair in the soup: it is a funny thing, this hobby of ours, because the keyboards we design are hunks of metallic rectangles that can last decades; the electronics inside them, not so much. It was not long before original OTD keyboard owners faced a huge problem: the shortage of supply of PCBs and plates for their keyboards. The files for neither were ever released, and the original designers eventually became inaccessible. This meant that the price of such items (original plates and PCBs, also called "OG parts") sky-rocketed, reaching four to five hundred dollars in the aftermarket today. If you have original OTD parts, be it the plate, the PCB or both, you had better hold on to them for dear life, because they are simply not available anymore. Modern OTD owners face a daunting dilemma: either take absurdly good care of the parts or simply do not build them. I am one such owner: although I own original OTD parts, I have never built them for fear of breaking them!

gon was not to fade away without his last laugh: many would try to make compatible, modernized OTD PCBs and plates, and to this day none have succeeded. As far as I am aware, the closest attempts were by Mechlovin' (see reference [1]) and a user on GH called kawasaki161, or kacklappen23, who designed the O87 PCB, whose interest check can be seen in [2]. Both of them reported that the OTD PCB was simply too complex to reverse engineer because of a single, hauntingly simple yet compromising factor: the spacing between keys. This is no demerit to Mechlovin' or kacklappen; as a matter of fact, Mechlovin' is a damn good PCB designer and kacklappen's O87 PCB did make for a good replacement; yet, both of them failed to make PCBs and plates that were perfectly interchangeable with the OG OTD parts.

There were also other things to consider: the OG OTD PCB, as designed by gon, used a huge (a "chonker", if you will) Motorola microcontroller, with a rather basic, non-reprogrammable, specially-coded firmware (TMK was not a thing back then, much less its more famous cousin QMK!) that was very limited due to the now-ancient microcontroller used. Not only that, all components were through-hole: that is right kids, you had to solder the microcontroller, capacitors, resistors and diodes all by yourself! None of those surface-mounted, factory-pre-soldered components that the cool kids use these days. You even had to solder the cable, think about that!

In modern times we need a PCB with a faster microcontroller that supports QMK, VIA and VIAL, with modern ESD and EMI protection, and built with the modern PCB fabrication techniques that the designers back then lacked.


3 The source of the problem

Whenever you hear keyboard enthusiasts talk about a "keyboard unit", sometimes abbreviated "k.u.", "KU", or simply "a unit" and "U", this refers to the size of a single switch (think your letters Q, W, E and numbers 1, 2, 3, and so on). The bigger keys vary in size, and not all keyboards have standardized sizes for their keycaps; the first standardization when it comes to keyboard layouts came with IBM's Model M.

There is also the problem of the switches: ALPS switches use specific keycaps, as do Kailh Chocs, IBM's buckling springs and Cherry's MX switches. The fact is that by the 1980s the Cherry MX standard had become by far the most prominent, due to its simplicity, variability and reliability; in the 1990s ALPS would retire from the mechanical keyboard switch market, as would IBM, handing all their tooling to Unicomp, who manufactures Model M's to this day. With the wide adoption of the Cherry MX switches came the wide adoption of MX-style keycaps; these keycaps are nicely measured in their respective Keyboard Units.

For the Cherry MX switches, the most basic keycap is the single-unit keycap; the distance between the centers of two adjacent keycaps is three quarters of an inch, or 19.05 millimeters; this means that one of these switches occupies a 19.05-by-19.05 mm piece of real estate on the keyboard PCB. This quantity, 19.05 mm, then defines a KU. With this widespread adoption, and the previous standardization of layouts by the industry with the IBM Model M, we arrive at the two most used keyboard layouts: those made by the American National Standards Institute, or ANSI, and the International Organization for Standardization, or ISO. These two organizations defined in the nineties the two layouts that are still the defaults to this day.

Let us take the ANSI layout: the Caps Lock key is exactly 1.75U wide. The spacebar is 6.25U, and all other keycaps in the bottom row are 1.25U. The stabilizer dimensions, as well as their hole sizes and distances, are all very well established in the Cherry MX datasheet [3].
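To make these sizes concrete, here is a small Python sketch (my own illustration, not taken from any official documentation) that computes where each switch center of a standard ANSI bottom row sits, assuming the 19.05 mm unit discussed above; the key names and widths are the standard ANSI ones.

# Sketch: X coordinates (in mm) of switch centers in a standard ANSI bottom row.
# Assumes the MX convention of 1 KU = 19.05 mm; key widths are given in keyboard units.
KU = 19.05  # millimeters per keyboard unit

bottom_row = [
    ("LCtrl", 1.25), ("LWin", 1.25), ("LAlt", 1.25), ("Space", 6.25),
    ("RAlt", 1.25), ("RWin", 1.25), ("Menu", 1.25), ("RCtrl", 1.25),
]

x = 0.0  # running left edge of the current key, in keyboard units
for name, width in bottom_row:
    center_mm = (x + width / 2) * KU  # a key's center sits halfway across its width
    print(f"{name:>6}: width {width:.2f}U, center at {center_mm:.2f} mm")
    x += width

On a standard-spacing PCB, this is all the information a designer needs to place every switch footprint.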

By the two-thousands the Cherry MX patent had fallen into the public domain, opening the market to the "MX clones" we know today. These clones, however, did not appear in the Egypt-plague quantities we see today until the gaming market boomed in the twenty-tens. This cemented the "MX style" as a standard in mechanical keyboards of all uses: office and gaming. Even supermarket cashier machines!

This very short and quick introduction is here to make a very loud affirmation: the huge, and I mean huge, majority of keyboards we design in this enthusiast mechanical keyboard hobby are designed on the premise of the MX standards. The keys have pre-set and well-known sizes, switches have well-known sizes, and everything is fine and dandy. Yes, there are some weird outliers like the Alice layout (which needs no introduction) and my own child, the Sagittarius (which, unlike the Alice, does need an introduction, found here [4]), but for the most part, everything is fine and dandy. The adoption of this standard even allows PCB designers to make PCBs for keyboards they did not design, like aftermarket or retro-compatible PCBs.

But such is not the case with the OTD PCBs, oh no. For some unknown reason, the designers of the OTD PCBs decided simply not to use the MX spacing standard; the switch spacing on the OTD PCBs is not 19.05 mm. And this makes reverse-engineering much, much more difficult. Because of this, neither Mechlovin' nor kacklappen23 were able to design PCB replacement parts that were compatible with the original OTD parts (the "OG" parts). Mechlovin' did what is in my opinion the most sensible thing: designing a plate-PCB pair that, albeit not using whatever spacing the OG pair did, does fit the OTD cases. This, however, leaves a sour taste in OTD owners' mouths...


4 Where do we go from here: measuring randomness

So after having spent quite a considerable amount of money on getting an OTD keyboard, I was left with a weird choice. The keyboard has original parts, from the PCB to the plate; yet, with no compatible replacements, I was not sure I would feel comfortable using these pieces. That is when I set out to design a PCB and a plate compatible with the OG parts.

The first thing I did was to investigate this issue of switch placement, distances and sizes. Since I had a pair of Chinese knock-off calipers around, I started measuring the PCB to see exactly where the issue lay. Here is the process that I used.

First, I had to make assumptions: the first is that MX-style switches were used. This means that the measurements I took should be equal or close to those we generally use; however, since the switch spacing is not the standard one, I could not assume exactly what its dimension was. While we are at it, I also had to allow for the possibility that there is no uniformity: the switch spacing may vary even among the switches on the same PCB.

Before showing any results, I will first preface my analysis with a quick, somewhat in-depth statistical background.

Randomness

Most people generally misinterpret what "random" means. A variable being random does not mean it can assume just any value; in the field of measure theory, a random variable is a function defined on a set of outcomes in a sample space. We are not going into details on the philosophy of randomness (nor could I!), but the mathematics behind randomness is solid and we will introduce some of it.

For a quick example, imagine the sides of a die (let's stick with a cube die here). The chance of obtaining a particular outcome when the die is thrown is exactly one sixth per number, that is, the faces for one, two, up to six all have the same chance of showing up. This is the most basic notion of randomness: if I throw the die many, many times, and count how many times I got ones, or twos, and so on, approximately a sixth of the time I will see ones, a sixth of the time twos, and so on.

However, imagine now that I get a biased die, say, by gluing a small weight to one of the faces; it is clear that the outcomes of the throws are still random, but it is also clear that the odds are now not equal: the "heavy face" will have a smaller chance of showing up.

This becomes more complicated when we talk about continuous variables, say real ones. Imagine a factory making metallic wheels for the automobile industry. Say your machinery makes 18 inch wheels; it is obvious that no machine can make wheels to perfection, and tolerances are involved. But the car manufacturer you supply tells you they need wheels with at most 0.1 inch of variation in diameter, plus or minus. The question is, how often does your machine make wheels that fall outside this specification?

If you think about it, the chance of getting exactly any particular value is always zero. For instance, what is the chance that the machinery makes a wheel measuring exactly 18.01 inches? Zero, just as it is for exactly 18.00 inches, or any other single value. Hence, when we talk about continuous randomness, it does not make much sense to ask how likely a particular value is; instead, it makes more sense to talk in intervals, that is, how likely we are to get wheels that measure between 18.00 and 18.01 inches. In more technical terms, the chance of observing any particular value is always zero because the set containing a single number has measure zero (a zero-measure set).

If we were to model this "continuous likelihood", our first thought would be this: the machine can produce wheels from a certain value, say \(x_1\) inches, to another value, say \(x_2\), and any value in between has the same likelihood of being produced. For our example, let us assume our machine makes wheels between 17.9 and 18.1 inches in diameter. The mean value \(\mu\), in this case 18 inches, is right in the middle of \(x_1\) and \(x_2\); that is, we are aiming to make 18 inch wheels: some come out smaller, some larger, but within tolerance they are all somewhat close to 18. This basically means that the Probability Density Function \(P(x)\) of this stochastic process is a uniform line, as shown in figure 1.

Figure 1. Plot of a uniform distribution showing the extremes x1, x2 and the mean.

The probability of getting a wheel diameter below a certain value \(z\) is called the Cumulative Distribution Function (CDF), and it is calculated by integrating \(P(x)\), that is, by taking the area under the curve of figure 1 (shown in blue):

\[ F(z) = P\left(x < z\right) = \int_{x_1}^{z} P(t)dt\]

Much the same way, the chance of getting a wheel measuring between a value \(y\) and a value \(z\) is

\[ P\left(y < x < z\right) = \int_{y}^{z} P(t)dt = F(z) - F(y)\]

Of course, the chance of getting a value between \(x_1\) and \(x_2\) is 100% or 1; since our function \(P(x)\) is a fixed value, one can obtain

\[ P\left(x_1 < x < x_2\right) = 1 \Rightarrow \int_{x_1}^{x_2} P(t)dt = 1 \Rightarrow \left(x_2 - x_1\right)P(x) = 1 \Rightarrow P(x) = \dfrac{1}{x_2 - x_1}\]

Therefore the CDF is calculated by

\[ F(z) = \int_{x_1}^{z} P(t)dt = \int_{x_1}^{z} \dfrac{1}{x_2 - x_1} dt = \dfrac{z - x_1}{x_2 - x_1} \]

Now, suppose you were to measure every single wheel that came out of the factory: what would be the chance of getting a wheel measuring, say, between 18.00 and 18.05 inches? That would be

\[ P(18.00 < x < 18.05) = \dfrac{18.05 - 17.9}{18.1 - 17.9} - \dfrac{18.00 - 17.9}{18.1 - 17.9} = 25\% \]

Much the same way, the chance of getting a wheel between \(18.05\) and \(18.10\) would also be 25%. And now you wonder: is this really true? If the machine is tuned to make 18 inch wheels, then values closer to 18 should be more likely, and this starts getting confusing.
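If you want to double-check these numbers, here is a minimal Python sketch of the uniform model above (the 17.9 to 18.1 inch machine from the example); it reproduces the 25% figure for both intervals, which is exactly the "flat" behavior that feels wrong and that the next section addresses.

# Uniform model of the wheel machine: diameters spread evenly between x1 and x2.
x1, x2 = 17.9, 18.1  # inches

def uniform_cdf(z: float) -> float:
    """F(z) = (z - x1) / (x2 - x1), clamped to [0, 1]."""
    return min(max((z - x1) / (x2 - x1), 0.0), 1.0)

# The probability of a wheel falling in a given interval is F(b) - F(a).
print(uniform_cdf(18.05) - uniform_cdf(18.00))  # 0.25
print(uniform_cdf(18.10) - uniform_cdf(18.05))  # 0.25 as well: the uniform model is "flat"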

The Central Limit Theorem

While the uniform probability density does model continuous randomness in an intuitive way, it is not a good model of randomness. Most stochastic processes are not uniform; these processes generally follow a probability density called the standard, or Gaussian, distribution:

\[ P\left(x\right) = \dfrac{1}{\sigma\sqrt{2\pi}} e^{-\dfrac{\left(x - \mu\right)^2}{2\sigma^2}} \]

Where \(\mu\) is the mean and \(\sigma\) is called the standard deviation. This function is known for its bell shape, as shown below.

Figure 2. Plot of a Standard Probability Density Function.

Ok, now this makes much, much more sense. Yes, now the values closer to \(\mu\) should be observed more frequently. But one question remains: how do I prove that, in the case of the wheel factory, this distribution is a good model for the randomness of the produced wheels? Well, you can literally prove it.

Theorem

Lindeberg-Lévy Central Limit Theorem. Suppose a sequence of independent and identically distributed real random variables \(X = \left\{x_1, x_2, ..., x_n, ...\right\}\) with expectation \(E\left[X\right] = \mu\) and variance \(Var\left[X\right] = \sigma^2 < \infty\), and let \(\overline{x}_n\) be the mean of the first \(n\) variables. Then, as \(n\to\infty\), the random variables \(\sqrt{n}\left(\overline{x}_n-\mu\right)\) converge in distribution to a normal distribution of mean zero and variance \(\sigma^2\).

The Central Limit Theorem seems complicated (and it is!) so I am going to break it down. Whenever we have a collection of real, random, independent and identically distributed variables (that is, measurements produced randomly in a way that each observation does not depend on the value of the other observations), if the sample size is big enough, then the probability distribution of the observations is closely approximated by a normal distribution of mean \(\mu\) and standard deviation \(\sigma\), calculated as:

\[ \mu = \dfrac{\sum\limits_{i=1}^{n} x_i}{n} \]
\[ \sigma = \sqrt{\dfrac{\sum\limits_{i=1}^{n} \left(x_i- \mu\right)^2}{n-1} } \]

More technically, as this process is taken to infinity, that is, as more and more measurements are taken, the approximation gets better and better. I probably skipped several hours of mathematics and statistics here, but what we need to understand is as follows:

  • A measurement process that produces observations whose results are not correlated will inevitably tend to a standard distribution; of course, this also applies to the OTD PCBs we want to measure;
  • As more and more measurements are taken, the distribution gets closer and closer to a standard distribution whose parameters (mean \(\mu\) and standard deviation \(\sigma\)) can be calculated;
  • Through this process we can obtain very important results about the measurements taken, two of which are of utmost importance: the mean \(\mu\) tells us the "target value" of the measurements, while \(\sigma\) tells us how close the measurements are to this \(\mu\). A big \(\sigma\) means that the measurements are very scattered, and the bell is thick and low; a small \(\sigma\) means that the measurements are more clumped together and the bell is much taller and thinner.
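These two formulas translate directly into code. Below is a minimal Python sketch (the wheel diameters are made up, purely for illustration) that computes \(\mu\) and \(\sigma\) exactly as written above, including the \(n-1\) in the denominator.

from math import sqrt

# Made-up wheel diameters (inches), standing in for a set of measurements.
samples = [17.98, 18.02, 18.01, 17.99, 18.00, 18.03, 17.97, 18.01]

n = len(samples)
mu = sum(samples) / n                                        # sample mean
sigma = sqrt(sum((x - mu) ** 2 for x in samples) / (n - 1))  # sample standard deviation

print(f"mu = {mu:.4f} in, sigma = {sigma:.4f} in")

Python's built-in statistics.mean and statistics.stdev compute exactly these two quantities, so in practice you rarely write them by hand.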

Classifying observations

For the standard distribution, the CDF is calculated as in its definition; since \(P(x)\) is defined for all reals, the integral starts from minus infinity:

\[ F(z) = P\left(x < z\right) = \int_{-\infty}^{z} P(t)dt = \int_{-\infty}^{z} \dfrac{1}{\sigma\sqrt{2\pi}} e^{\left(-\dfrac{\left(t-\mu\right)^2}{2\sigma^2}\right)} dt\]

Which is, of course, the area under the graph of \(P(x)\). This corresponds to the red-shaded area in figure 3. The problem with this function is that it has no analytical (closed) form; it is a perfectly well-defined function, but it cannot be written as a combination of elementary functions. It is closely related to the so-called "Error Function", abbreviated \(\text{erf}(z)\), and the only way to evaluate it is numerically.

Figure 3. Graph showing the Cumulative Distribution Function of a Standard Probability Density Function.
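In practice, that numerical evaluation goes through the error function, which most standard math libraries already provide; here is a minimal Python sketch of the relation between the Gaussian CDF and \(\text{erf}\), with the wheel-example numbers used purely as a sanity check.

from math import erf, sqrt

def normal_cdf(z: float, mu: float, sigma: float) -> float:
    """F(z) for a Gaussian of mean mu and standard deviation sigma,
    written in terms of the error function:
    F(z) = (1 + erf((z - mu) / (sigma * sqrt(2)))) / 2."""
    return 0.5 * (1.0 + erf((z - mu) / (sigma * sqrt(2))))

# Sanity checks: half the area lies below the mean, ~97.7% lies below mu + 2*sigma.
print(normal_cdf(18.0, 18.0, 0.05))   # 0.5
print(normal_cdf(18.1, 18.0, 0.05))   # ~0.977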

Following this, we need to understand how to classify observations. Take a Standard Probability Density of mean \(\mu\) and standard deviation \(\sigma\). The question becomes: if I were to pick any measurement, how likely is it to obtain a value that deviates from the mean by, say, \(\sigma\)? And by \(2\sigma\)? Or the other way around: what deviation from the mean gives me a probability of, say, 95%, that is, what is the range of measurements that has a 95% likelihood?

In other words, I want to define a Confidence Interval, that is, a range of estimates based on a Confidence Level (sometimes called a Degree of Confidence). Take \(\alpha\) as this Confidence Level. I want to know two values, \(a\) and \(b\), such that \(P(a < x < b) = \alpha\). Because the Standard Density function is symmetrical around its mean, this interval is always going to be of the form \(\left(\mu-x,\mu+x\right)\), and the Confidence Level can be written as the equation in figure 4. To every Confidence Level \(\alpha\in\left(0,1\right)\) corresponds some \(x>0\) such that the Confidence Interval of \(\alpha\) is \(\left(\mu-x,\mu+x\right)\).

Figure 4. Graph showing the Confidence Interval and Confidence Level of a Standard Probability Density Function.

Why is this important, you say? Because a Confidence Level allows us to estimate an interval in which the measurements are most likely to be found. For instance, let us take again our wheel factory example. Let us take a Confidence Level of 0.95 or 95%. After making lots of measurements, obtaining the mean and standard deviation of these measurements, we arrive at the conclusion that the Confidence Interval of the 0.95 Confidence Level is \((17.95,18.05)\). This is wonderful news, because you need the manufactured wheels to be between 17.9 and 18.1 inches; at the same time, you are 95% likely to manufacture wheels in \((17.95,18.05)\)! However, if your Confidence Interval were, say, \((17.8,18.2)\) then this is a major problem because a great portion of the manufactured wheels are out of specification.
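If you already know \(\mu\) and \(\sigma\), the Confidence Interval for a given Confidence Level can be computed directly; here is a small Python sketch using the standard library's statistics.NormalDist, with a made-up \(\sigma\) chosen so the numbers roughly match the wheel example above.

from statistics import NormalDist

mu, sigma = 18.0, 0.0255   # assumed machine mean and standard deviation (made up for this example)
alpha = 0.95               # Confidence Level

# For a symmetric interval (mu - x, mu + x), the upper bound sits at the
# (1 + alpha) / 2 quantile of the distribution, and the lower bound at (1 - alpha) / 2.
dist = NormalDist(mu, sigma)
lower = dist.inv_cdf((1 - alpha) / 2)
upper = dist.inv_cdf((1 + alpha) / 2)
print(f"{alpha:.0%} Confidence Interval: ({lower:.3f}, {upper:.3f})")  # roughly (17.95, 18.05)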

Given the importance of the Confidence Level and Confidence Interval, there is another parameter that needs to be known: the z-index. Again exploiting the symmetry of the Standard Probability Distribution, we know that to every Confidence Level \(\alpha\) there corresponds a deviation \(x\) such that the Confidence Interval of \(\alpha\) is \(\left(\mu-x,\mu+x\right)\). The number \(x\) is called a deviation: basically, if you give me a Confidence Level \(\alpha\) and you randomly pick an observation from your set of measurements, there is an approximate chance of \(\alpha\) that the picked measurement will be in the Confidence Interval of \(\alpha\). This means that the number \(x\) is a loose measure of how much you need to deviate from the mean \(\mu\) to get a chance of \(\alpha\). We can prove that the quantity \(z\) defined as

\[ z = \dfrac{x-\mu}{\sigma} \]

is also a random variable whose Probability Density is the Standard Density of mean \(0\) and standard deviation \(1\). This means that the index \(z\) allows us to make the same analysis for any standard deviation; we only need to normalize \(x\) and we get an index that relates back to a standardized density. The Standard Probability Density of mean \(0\) and deviation \(1\) is called a Normal Probability Density, and the number \(z\) is called a z-index: it gives us an insight into how much a given measurement \(x\) deviates from the mean, normalized by \(\sigma\). The biggest importance of the z-index is that it allows us to classify the observation set and its measurements: when dealing with randomized measurements, it is very common to divide the measurements into three categories, "normals", "outliers" and "anomalies".

  • "Normals" are the most common measurement interval. These are naturally the most occurring and dictate what the "average" process is;
  • "Outliers" are measurements that differ significantly from what you would expect, but are still reasonable. That is, while not common are still within a certain Confidence Level range. Outliers are generally attributed in random variances in the manufacturing or the measurement process. In other words, stuff can happen during the manufacturing of the PCB (maybe the factory was using an overheated machine or a dull drill?) or the measurement process (maybe I did not use the calipers properly when measuring?). Outliers are expected in any statistical population;
  • "Anomalies" fall outside of a reasonable Confidence Level, that is: the occurence of these measurements raises eyebrows and tells us something is going on — since the chance of these measurements happening is so small, the fact they did appear is concerning and tells us some type of external bias might be acting upon the measurements, skewing them.

The rules for classifying these sets are empirical and depend on the author of the study. In this analysis we will use the "68-95-99", or "three-sigma", rule. In this rule, "normal" measurements are those that fall within the Confidence Level of 95%; outliers are between 95 and 99% confidence, and anomalies are above 99%. The rule gets its name because the Confidence Level of the interval \(\left(\mu-\sigma,\mu+\sigma\right)\) is 68.27%, the Confidence Level of the interval \(\left(\mu-2\sigma,\mu+2\sigma\right)\) is 95.45%, and that of the interval \(\left(\mu-3\sigma,\mu+3\sigma\right)\) is 99.73%. This means that (a short sketch implementing this rule follows the list):

  • Normal measurements have a z-index below 2; upon picking a random measurement from your set, there is an approximate chance of 95.5% that the random pick is in this range;
  • Outliers have a z-index above 2 and below 3, meaning that upon a random pick, the chance of the picked measurement being an outlier is approximately 4.3%;
  • Anomalies have a z-index above 3. A random pick has a chance of roughly 0.3% of being an anomaly.
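Here is that sketch: a minimal Python example that buckets a set of measurements by their z-index under the three-sigma rule. The numbers are made up for illustration only; the real measurements live in [5].

from statistics import mean, stdev

def classify(measurements):
    """Bucket each measurement as 'normal', 'outlier' or 'anomaly' by its z-index."""
    mu, sigma = mean(measurements), stdev(measurements)
    buckets = []
    for x in measurements:
        z = abs(x - mu) / sigma          # normalized deviation from the mean
        if z < 2:
            buckets.append((x, z, "normal"))
        elif z < 3:
            buckets.append((x, z, "outlier"))
        else:
            buckets.append((x, z, "anomaly"))
    return buckets

# Made-up switch spacings in mm, one of them deliberately off:
for x, z, label in classify([19.00, 19.05, 19.02, 19.04, 19.03, 19.01, 19.60]):
    print(f"{x:6.2f} mm  z = {z:5.2f}  {label}")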

It is important to note that this classification depends greatly on the field of study and the precision of the results. For instance, in some technical fields like physics and engineering it is common to consider a result a discovery or a breakthrough only when it falls outside of the five-sigma range, that is, a Confidence Level of 99.99994%!

The gist of classifying measurements is to identify patterns and find possible biases on the set of measurements. For instance, if a set of measurements has too many outliers or anomalies, this could be explained by those particular measurements being forced in some way, that is, not independent.


5 Finally to the measurements

Before measuring anything on the OTD PCB, I needed to lay down my working plan. I took the pair of Chinese knock-off calipers I had lying around and started measuring away; I found many interesting things, but the two most interesting were:

  • The PCB indeed used non-standard switch spacings, and not only that, it also used non-standard sizes for the switch holes;
  • The switch spacings were also not uniform: they varied in specific places of the PCB!

So now I knew I had to approach this in an entirely different way. I could make no assumptions, not even about hole sizes or spacings. So what I did was:

  • First I bought a good pair of digital Mitutoyo calipers. These were paid for by the AcheronProject contributors in the AcheronProject OpenCollective [6], so thank you guys for this!
  • Second, for every switch I measured all three of its holes: the central hole and the left and right ones;
  • Third, for every switch I measured the distance from its center hole to the center holes of the switches to its left and right.

Reference [5] shows the measurements I took. Figure 5 shows a schematic of the measurements.

Figure 5. Schematic of the measurements taken on the OTD PCB.

Using these measurements, we can estimate the distance between the centers of two adjacent switches.

The results showed that the situation was indeed grimmer than I expected. The calculated spacing between the keys was not the default 19.05 mm, and even worse, the sizes of the switch holes did not match the MX datasheet either. Most importantly, however, the switch spacing is not uniform across the PCB; it varies in certain places. In order to understand this conclusion, though, we first need to understand the data processing that was done.

Analysis of the obtained measurements

Analysis of the data in [5] shows that there are no outliers in the measurements (meaning that the sample group is very good) and that anomalies are clustered in two groups, which I called anomalies of type 1 and type 2. Type 1 anomalies are those whose normalized deviation is between 2.33 and 10; type 2 anomalies have a normalized deviation greater than 10. These values of 2.33 and 10 were adopted upon inspection of a population graph, that is, I graphed the measurement intervals against the number of times they occurred, and the clusters are immediately visible. In [5], type 1 anomalies are colored red and type 2 anomalies blue. The analysis of the anomalies shows that both type 1 and type 2 anomalies happen in very specific places:

  • Type 1 anomalies occur in the first, second, third and fifth rows, between the alpha cluster and the navigation cluster (between F12 and Print, Backspace and Insert, slash and Delete, right Control and the left arrow);
  • Type 2 anomalies show up at the extremes of the fourth and fifth rows, that is: Caps Lock, Left Shift and Left Control are much, much closer to the left PCB edge than they should be; much the same way, Enter and Right Shift are much farther apart from their clusters.

The fact that these anomalies are so sharply clustered, and that they happen at very specific places in the layout, means that the only reasonable explanation for their appearance is an external bias: they were introduced not by a defect in the manufacturing process (like tolerances in the PCB drilling and milling), but by the design itself. Interestingly, the data processing shows that no outliers were found; this can only indicate a good manufacturing process and an accurate measuring process.
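For completeness, the bucketing described above is straightforward to reproduce. Here is a minimal Python sketch of the idea, using the thresholds quoted earlier (2.33 and 10) and made-up normalized deviations, since the actual data lives in [5].

def bucket_anomalies(z_indices):
    """Split normalized deviations into normals, type 1 and type 2 anomalies,
    using the thresholds adopted in the text (2.33 and 10)."""
    normals  = [z for z in z_indices if z < 2.33]
    type_one = [z for z in z_indices if 2.33 <= z < 10]
    type_two = [z for z in z_indices if z >= 10]
    return normals, type_one, type_two

# Made-up z-indices standing in for the per-switch spacing deviations:
normals, t1, t2 = bucket_anomalies([0.3, 1.1, 0.8, 2.9, 4.5, 15.2, 22.0, 0.6])
print(len(normals), "normal,", len(t1), "type 1,", len(t2), "type 2")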

References

[1] Official Statement: Rouge (OTD Compatible) Group Buy. Available at this link. Last accessed May 17, 2022.

[2] Interest Check for Modernized OTD PCBs and reproduction plates. Available at this link. Last accessed May 17, 2022.

[3] Cherry MX switches Datasheet. Available at this link. Last accessed August 22, 2023.

[4] Sagittarius keyboard Interest Check. Available at this link. Last accessed August 23, 2023.

[5] OTD360 PCB measurements. Available at this link. Last accessed August 23, 2023.

[6] AcheronProject Open Collective. Available at this link. Last accessed August 23, 2023.