Tuesday, December 20

You do not exactly want a relational database

SQL databases are touted as relational, with some historic infighting over what "relational" should mean.

But mostly you do not want a relational database.

A "true" relational database would, in SQL terms, behave as if every SELECT statement were really a SELECT DISTINCT statement. (This follows from the mathematical definition of a "relation" - a set of tuples, with no duplicates - which is also where we get things like outer joins from. Of course, if we instead redefine "relational" to mean whatever arbitrary collection of concepts got included in some particular SQL implementation or SQL standard, then SQL is trivially "relational" and nothing about that particular choice needs to change.)

So, for example, if you were tracking money (or food, or really anything), and you happened to have multiple identical rows, some of what they record would vanish from your record keeping.
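
To make that concrete, here is a minimal sketch using Python's sqlite3 (the ledger table and its values are invented for illustration):

    import sqlite3

    # A toy ledger with legitimately duplicated rows: two separate
    # five dollar payments happen to be recorded identically.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE ledger (day TEXT, amount INTEGER)")
    db.executemany("INSERT INTO ledger VALUES (?, ?)",
                   [("2016-12-20", 5), ("2016-12-20", 5), ("2016-12-20", 3)])

    # SQL's bag semantics keep both five dollar rows:
    print(db.execute("SELECT SUM(amount) FROM ledger").fetchone()[0])  # 13

    # Strict set ("relational") semantics collapse the duplicates:
    print(db.execute(
        "SELECT SUM(amount) FROM (SELECT DISTINCT day, amount FROM ledger)"
    ).fetchone()[0])  # 8 -- five dollars have vanished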

I think the fundamental structure of a database table is a named collection of columns (where each column is a sequence of the same length and each value in a column is the same type - stored the same way and interpreted using the same mechanics).

Though, for databases optimized for things like retail sales, there's a long history of pushing that structure down into the row and implementing the table as a collection of rows (with some waffling about the sequential aspect of these rows - they really are still a sequence, but the traditional database implementation reorders things rather often).
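
To make the contrast concrete, here is the same toy table held both ways, sketched in Python (the names and values are invented for illustration):

    # Column-oriented: a named collection of equal-length, homogeneous columns.
    columns = {
        "day":    ["2016-12-20", "2016-12-20", "2016-12-21"],
        "amount": [5, 5, 3],
    }

    # Row-oriented: the same table pushed down into a sequence of rows.
    rows = [("2016-12-20", 5), ("2016-12-20", 5), ("2016-12-21", 3)]

    # Either layout answers the same question; they differ in what is cheap.
    print(sum(columns["amount"]))             # 13: walks one contiguous column
    print(sum(amount for _, amount in rows))  # 13: walks every row for one field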

Monday, August 29

Business Case Issues for Semiconductor Printers

(This is not a formal business case, but preliminary thinking that might be useful for building a business case.)

Intuitively, Semiconductor Printers (probably mostly based on silicon wafers and molecular beam epitaxy, but other approaches can be viable for some purposes) seem like they should "fit" in today's economy. The underlying techniques are old and proven (and a first generation of patents has expired). We have a variety of systemic issues where this approach seems like it could be fruitful. And making this work is just hard enough that the prices on these systems are extremely high - ripe for disruptive harvesting.

But why?

#1) Entertainment value.

People like to build things. Construction games have historically been steady money makers, and there's a measurable fraction of our population (perhaps 16%?) who just enjoy making things.

#2) STEM Education

There's a lot of schlock information out there, and to distinguish between the meaningful and the trash you need to have some practical experiences. Done right, these printers can teach people some important things about physics and engineering which will later be useful in addressing real needs.

#3) Business Failures

Most businesses fail. People building businesses often need to cut corners for a variety of reasons. Entire lines of semiconductors can suddenly become unavailable, or available only under adverse conditions. In many cases these issues can be solved through negotiation and/or finding other approaches. But having another fallback can help extend negotiation timetables and can sometimes be that "other approach".

#4) Tool building

Currently, there is a lot of red tape and cost involved in designing and building new semiconductor devices (unless you happen to be associated with the right people - who are often in some other country). We can change that.


Semiconductor devices (and related things: conductors and insulators) include sensors and devices that emit different kinds of light. The underlying fabrication process might also be used for making "nano-scale devices" - perhaps useful for fabrics or for medical or biological work.

Finally, note that a major cost and design issue is the vacuum chamber. High vacuum chambers are readily available already, but getting power and materials into them can be a significant challenge. This all changes for space-based systems, where vacuum is a "natural resource". This points at a variety of future possibilities, and the people who understand how to make these things work well will have some advantages in getting space-based computer manufacturing ready for market.

Monday, July 11

Printing Semiconductors

https://en.wikipedia.org/wiki/Molecular_beam_epitaxy

Our economy depends not only on mass-scale, popularity-driven production, public education, solid work ethics and a healthy respect for experimentation, but also on hobbyist activities where the future designers and decision makers learn from experience how things can actually be made to work.

So that, I think, means I should be leveraging the "3d printing" technology efforts and getting into semiconductor printing. Most of the relevant patents there have now expired, and there's quite a lot of solid precursor work dating from the early part of the 20th century.

Basically, it's taking the technologies behind the television tube (and other technologies - scanning electron microscopes, mass spectrometers) and using them to develop hobby scale circuitry. Or, that is my hope. (Now let's see if I can motivate myself enough to carry this through the obvious problems...)

Inspirations include:

Ben Krasnow's home built electron microscope

Charles Moore's OKAD VLSI design system

And hopefully I can get enough done to engage interested people at nova-labs

So, first off, that electron microscope is possibly simpler than an initial build of the printer would be:

First, you need to have some way of isolating problems. For a RepRap printer, the human eyeball is good enough, but for things like semiconductors you need something that can see at the microscopic scale, and the electron microscope is a plausible first approximation for what you need there.

Second, you need to be boiling off the materials that you are going to be depositing, and that means things that will need to be replaced over time.

And, of course, there's the need for the semiconductor blanks to be printed on. Almost certainly this will be silicon for any first efforts. So that's another cost item and something else where things can go wrong. (How do I check for problems on the surface of the blanks? If there are fingerprints, or other defects, how do I find a part of the surface which can still be used? Is there some way of cleaning or polishing away minor defects? What would that take?)

For dopants, I imagine initial efforts would focus on boron (p-type) and phosphorus and arsenic (n-type), and since I really don't know what I'm doing here, I'll guess that I'll need a vacuum of 10^-8 torr (though maybe that is too taxing for an initial effort) and dopant concentrations between 1% and 0.001%. (But what temperatures do I need for these things? What voltages? How should focusing work?)
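
To put those percentages into the atoms-per-cubic-centimeter units that references usually quote, here is a back-of-the-envelope conversion (assuming crystalline silicon's atomic density of roughly 5e22 atoms per cubic centimeter):

    # Convert a dopant percentage into the usual atoms/cm^3 units.
    SI_ATOMS_PER_CM3 = 5.0e22  # approximate atomic density of crystalline silicon

    for percent in (1.0, 0.001):
        dopants_per_cm3 = (percent / 100.0) * SI_ATOMS_PER_CM3
        print(f"{percent}% -> {dopants_per_cm3:.1e} dopant atoms/cm^3")

    # 1%     -> 5.0e+20 atoms/cm^3 (very heavy, degenerate doping)
    # 0.001% -> 5.0e+17 atoms/cm^3 (still on the heavy side of typical)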

Also, reading various references, the temperatures involved for the wafer itself might range quite widely. https://www.google.com/patents/US3615931 suggests 450 C to 650 C for a variety of semiconductor substrate types, but another reference suggested temperatures as high as 850 C to 900 C. And the melting point of silicon is slightly above 1400 C.

Meanwhile, http://lase.ece.utexas.edu/mbe.php suggests that tightly controlled temperature ranges are important for repeatable results (which, in turn, are important for the results of destructive testing to be useful). [There's also a considerable body of knowledge to absorb, relating to various models of how semiconductors form and perform - but I think that having some practical examples to work with will help significantly in understanding that material. In fact, that's a significant part of the point of this exercise.]
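
As a gesture at what that temperature control involves, here is a minimal sketch of the kind of feedback loop (a textbook PID controller) a substrate heater might run - the gains and the toy thermal model here are invented for illustration, not tuned for any real heater:

    def pid_step(setpoint, measured, state, kp=2.0, ki=0.01, kd=0.5, dt=1.0):
        # One step of a textbook PID controller.
        # state is (integral, previous_error); returns (power, new_state).
        error = setpoint - measured
        integral, prev_error = state
        integral += error * dt
        derivative = (error - prev_error) / dt
        return kp * error + ki * integral + kd * derivative, (integral, error)

    # Toy plant: a substrate that heats with applied power and leaks to ambient.
    temp, state = 25.0, (0.0, 0.0)
    for _ in range(2000):
        power, state = pid_step(setpoint=600.0, measured=temp, state=state)
        power = max(0.0, power)                      # a heater cannot cool
        temp += 0.05 * power - 0.01 * (temp - 25.0)  # crude thermal model
    print(f"holding near {temp:.1f} C")              # converges toward 600 C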

That said: repeatability is *not* a priority for the initial implementation. For the initial implementation we need a relatively minimal goal. The point of building this thing in the first place is to get practical experience; turning it into something that can be useful to others is going to need that experience.

So, ok, honestly, the first thing to try to do is get a diode working. This is the absolute simplest thing that can demonstrate viability. (How do I test that I am putting it together right? How do I affix electrodes onto the result?)
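
One sanity check I know of: trace the current-voltage curve and compare it against the ideal (Shockley) diode equation. A minimal sketch of what the ideal curve looks like (the saturation current and ideality factor here are guesses; real values would come from measurement):

    import math

    def diode_current(v, i_sat=1e-12, n=1.0, temp_k=300.0):
        # Shockley diode equation: I = I_s * (exp(V / (n * V_t)) - 1)
        k_b = 1.380649e-23      # Boltzmann constant, J/K
        q = 1.602176634e-19     # elementary charge, C
        v_t = k_b * temp_k / q  # thermal voltage, ~0.026 V at room temperature
        return i_sat * (math.exp(v / (n * v_t)) - 1.0)

    # A working junction conducts strongly forward and barely at all in reverse:
    for v in (-1.0, 0.0, 0.3, 0.6, 0.7):
        print(f"{v:+.1f} V -> {diode_current(v):.3e} A")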

After that would be a transistor, then perhaps a logic gate and then a binary counter. Another path gets into analog amplification. And if I can even get close to 1970s integrated circuit capabilities, I'll be happy that I understand those fundamental concepts.

For a source of interested people, I'm thinking makerspace community, educational community and maybe eventually business people (but ramping things up for popular use would require engagement with people who are very used to doing other things and with priorities which conflict with my own, so that's something for much later).

I spoke with one person at nova-labs, and he suggested I might need an x-ray license. So I spent some time trying to research Virginia and federal (Nuclear Regulatory Commission) regulations, and I currently think that I will not need a license. Though, admittedly, I do not have enough specifics to even know what would be needed. But I am not planning on using this on people, and I'm not going to even approach 1e6 electron volts.

Though, also, there's a bit of an assumption there. (I'm going to be working with maybe 5 KV, and I think the implication of the electron volt system of measurement is that we are talking about the amount of energy imparted to one particle - so that's a mix of temperature and voltage and whatever else, and it gets a bit fuzzy exactly how much energy is involved, especially when you start thinking about the energy which exists as mass. In principle, though, the electron volt regulations are about the difference from rest state, and are not about regulating mass itself.)
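
At least for the beam itself, the accounting is simple: an electron accelerated through a potential difference of V volts gains V electron volts of kinetic energy. A quick check of where my guessed 5 KV sits against a 1e6 electron volt threshold:

    # An electron accelerated through V volts gains V electron volts (eV).
    JOULES_PER_EV = 1.602176634e-19

    accelerating_voltage = 5_000.0  # the 5 KV guessed above
    energy_ev = accelerating_voltage
    print(f"{energy_ev:.0f} eV = {energy_ev * JOULES_PER_EV:.2e} J")
    print(f"fraction of 1e6 eV: {energy_ev / 1e6}")  # 0.005 -- not even close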

Anyways... as long as the energies are low enough, no licensing should be needed. If licensing does wind up being needed because of potential radiation hazards, then I guess I would need to figure out which licenses would be needed from which agency. But, for now, this is looking more like a hand waving issue than a real issue.

Finally, there's the issue of how to install the connector pins that we traditionally use to hook up with integrated circuits. Conceptually, we might be able to just have some pads which we connect to, and that is something to play with. But "soldering" titanium wires using (for example) molybdenum deposition might be the place to start. This will apparently destroy the crystal structure underneath where it deposits, and it seems like perhaps something which has a cooler vapor phase than molybdenum might be the right choice (lead? tin+lead? tin? ... probably tin - though apparently, since lead atoms are so big, lead can act as a carrier for a doping material and will just sit on the surface without becoming part of the crystal...)

In fact, this is such a basic issue that probably the right place to start is first hooking two wires to a silicon wafer and verifying that they can conduct and that the thing is mechanically sound. Once that is doable, try making a semiconductor resistor. And only once that is possible does it make sense to try to make a diode. (And then, after that: a transistor, some logic gates, a flip/flop, a counter - all while reviewing to make sure the techniques are documented, with some stream-of-thought blogging for things that might be useful memory jogs later. Somewhere in this, repeatability - and, thus, temperature control and high vacuum - starts mattering. And, hypothetically, after this has been done it would make sense to bring in design tools, such as OKAD.)

(fifth draft of this page)



Tuesday, June 14

USB Mouse

For various reasons, I have been thinking about building a USB mouse.

This is tentative. I am not yet sure if I have the motivation to carry through on this, and I am uncertain about obtaining adequate switches and sensors.

Hypothetically, though, this is doable:

Mouse CPU would be a GreenArrays F18 for cost and power reasons. $20-ish, last I looked (but prices change in either direction).

Case would be "3d printed" and then smoothed. If I can't do that on my own, I'd visit the local makerspace (NOVA Labs).

The initial idea would be to support standard mouse protocol and keyboard protocol. The primary use would be for gaming, and I'd want some useful fraction of a keyboard available on the mouse itself. Other mice do this already.

(I might have to get into non-standard protocols and specialized drivers, but not in the initial implementation.)
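
For the standard mouse protocol side, the core of what gets sent is tiny: a boot-protocol HID mouse reports three bytes per update (a button bitmap plus signed X and Y movement). A rough sketch of packing one such report in Python (the helper function here is hypothetical; the byte layout follows the HID boot protocol):

    import struct

    def pack_boot_mouse_report(left, right, middle, dx, dy):
        # 3-byte USB HID boot-protocol mouse report:
        #   byte 0: button bitmap (bit 0 = left, bit 1 = right, bit 2 = middle)
        #   byte 1: signed X movement since the last report
        #   byte 2: signed Y movement since the last report
        buttons = (left << 0) | (right << 1) | (middle << 2)
        return struct.pack("<Bbb", buttons, dx, dy)

    # Left button held while moving 5 counts right and 3 counts up:
    print(pack_boot_mouse_report(1, 0, 0, 5, -3).hex())  # 0105fd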

The other thing is that I would need to be able to isolate problems, so I would need some kind of display of when keys are pressed/released, so that I can distinguish between problems in the mouse itself (like problems with the switches) and problems in the computer it is hooked up to.

Obviously, this would take quite some time to build. Time that I could instead spend playing games. Does that make sense economically? (Pro-tip: whenever someone uses an economic argument they probably do not know what they are talking about. Economics seems to be a mix of habits, peer pressure, religion, politics, laziness and pure old-fashioned bs.)

Anyways, this would take quite some time to build, so the real question is: am I motivated enough to expend that effort?

For starters: where can I find the sensors? For an optical mouse, you have a mini-camera (much lower resolution, and cheaper, than a regular camera) taking pictures of the surface the mouse is on hundreds or thousands of times per second, and then some software to convert those images into information about position changes, which then gets packetized and sent to the computer. The trick is finding something that (a) I can buy (or make - but somehow semiconductor fabrication is not available at the hobbyist level anywhere I can find, which is silly given how important the related skills and physics are), and (b) that I can understand [find adequate documentation on] well enough to use.
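
The software half of that is easy enough to prototype, at least. Here is a minimal sketch of the idea - estimating the shift between two consecutive frames by brute-force correlation over small displacements (this is the general principle only; chips like the ADNS2051 do their own version of this on-chip at much higher rates):

    import numpy as np

    def estimate_shift(prev, curr, max_shift=4):
        # Find the (dy, dx) that best aligns curr with prev by
        # brute-force search over small integer shifts.
        best, best_score = (0, 0), -np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(curr, dy, axis=0), dx, axis=1)
                score = np.sum(prev * shifted)  # correlation score
                if score > best_score:
                    best, best_score = (dy, dx), score
        return best

    # Fake a textured surface, then a second frame shifted by (-2, +1):
    rng = np.random.default_rng(0)
    frame_a = rng.random((16, 16))
    frame_b = np.roll(np.roll(frame_a, -2, axis=0), 1, axis=1)
    print(estimate_shift(frame_a, frame_b))  # (2, -1): the motion, inverted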

Here are some sample spec sheets (and I hope that the links remain stable and active and do not fall prey to the type of manager who likes to destroy this kind of thing):

https://www.sparkfun.com/products/retired/12907
http://www.bidouille.org/files/hack/mousecam/ADNS2051.pdf

The biggest problem right now is availability - for example, nothing is in stock at Digi-Key.



Tuesday, May 31

"Standards"

"The nice thing about standards is that there are so many to choose from."

Standards can be a blessing, or a curse, or both. They are an outgrowth of a kind of problem that occurs when dealing with people.

Basically: when dealing with people you need some things which do not change, so that you can deal with the "important stuff" (which itself varies depending on who and what you are dealing with). And "standards" are the stuff that is held still.

Put differently: standards are useful at the interfaces between where one person is doing work and another person is doing work.

Also: standards that have been "designed" tend to be rather useless - what you want are standards which have evolved for dealing with problems similar to what you are dealing with. (Sometimes this includes "designed standards" but usually when that happens only a small part of the design is relevant. Or, at least, that has been my experience in the context of computing. This says something sad about the usefulness of a lot of people hours. But it also says something about how you should expect to be working if you want to get something useful done.)

Meanwhile: discussions about standards can become rather acrimonious. In my experience, this tends to be a mix of personality flaws in people holding the discussions and irrelevant wasted motion in the standards themselves.

Related is probably the clichéd concept of "if you want something done right, you'll need to do it yourself" and its close [imperative] relative "take some responsibility". People tend to be frustrating to deal with, but that is often as much a fault of the person getting frustrated as it is a fault of the people they are getting frustrated with. Remedies which ignore either side of this kind of problem tend to fail.

It's often best to try to fail early so you can learn from your mistakes. But ...

War and China

The top 15 most populated countries on the planet are:

1) China
2) India
3) United States
4) Indonesia
5) Brazil
6) Pakistan
7) Nigeria
8) Bangladesh
9) Russia
10) Japan
11) Mexico
12) Philippines
13) Ethiopia
14) Vietnam
15) Egypt

Most computer hardware is made in China - the laws, regulations and traditions of the other countries are structured to favor this process.

Meanwhile, though, it seems to be extraordinarily difficult to get certain kinds of information out of China. We have a general idea of the size of its population (between 1.3 billion and 1.4 billion). We have a general idea of the life expectancy of this population (about 75 years), but it's difficult to get information about causes of death. Actually, it's difficult to get much of any good information out of China. I sometimes wonder if Chinese officials even know. But if we assume that these numbers are correct, we should expect that about 18 million Chinese die every year.
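
The arithmetic there is just a steady-state approximation - deaths per year is roughly population divided by life expectancy:

    # Steady-state approximation: deaths/year ~= population / life expectancy.
    for population in (1.3e9, 1.4e9):
        print(f"{population:.1e} people -> "
              f"{population / 75.0 / 1e6:.1f} million deaths/year")
    # 1.3e9 -> 17.3 million; 1.4e9 -> 18.7 million; hence "about 18 million"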

But what is killing them?

Historically speaking, we see a lot of disease coming out of China. I seem to recall reading reports that our annual flu epidemics originate in China (though I have also read about some coming from India). But, also, apparently the bubonic plague originated there. This kind of thinking leads me to speculate a lot about the nature of Chinese philosophy and morals.

But this also leads me to speculate a lot about my own background as a "computer professional". Seriously, what good have my efforts done for anyone?

Not much, I'm afraid...

But why did I write this?

Well... the USA loses tens of thousands of people each year to the flu. Japan (which has better overall health care than the USA - life expectancy is longer in Japan) loses hundreds of thousands of people each year to the flu. And, once upon a time, I remember hearing that flu typically comes from China (we get a new version each year). But I have a vague memory that maybe it sometimes comes from India. But when I try looking that up, on the internet? I can't find any discussions of the topic.

This suggests, if nothing else, that we do not have free speech on the internet. We have free bs, but that is not quite the same thing. For something this big - hundreds of thousands of deaths each year in one country, millions each year when you extrapolate to other countries (and who knows how bad it is at the source) - to be hushed up, to be a topic with no available information, there would have to be active efforts going on to distract people from the topic.

Perhaps fear of war? (Where, also, most of the deaths historically have been through disease - but I'm not sure that war deaths have ever been on this scale as a sustained thing. Something like WWII probably matched this flu death rate for the five years it ran, but without actual numbers for flu mortality rates even that is a guess which might not be true.)

Anyways: if this kind of thing is "not war" that can only be because this kind of thing is "worse than war".

Meanwhile, almost all fundamental computer fabrication has been moved to China. The reasons for this are involved, but it's not entirely unrelated.

Sunday, April 3

CS for All...

I commented on https://computinged.wordpress.com/2016/04/01/we-need-to-better-justify-cs-for-all/ and on reflection, I am not all that happy with the response I posted there.

There's some significant work needed to flesh out what we mean by "Computer Science for All", and I feel I could have done much better.

One issue, of course, for the computing education blog is: what does the curriculum look like? And there's just so much to cover.

But let's consider English literacy, and how we teach that: we do not cover the "great works" when teaching the language. We do not even cover more than a tiny fraction of the dictionary. Instead, we try to cover the "basics", and we keep going over those, layer upon layer.

Meanwhile, some significant number of students drop out. Students in inner city schools tend to wind up with a significantly different understanding of the language than students in farm schools or students in prep schools. And a part of this has to do with their social life and their relations with the other students. And another aspect has to do with the student's interests and the problems they see needing to be addressed in their life outside the school. And these obviously vary considerably.

So... back to "CS for All".  What are the basics?

As the first few passes, I'd say:
  • Binary number system (perhaps also octal and hexadecimal)
  • ASCII character system (perhaps also getting slightly into Unicode)
  • Sequences (of numbers, of characters, of sequences)
  • Searching (needs sequences)
  • Sorting (needs sequences)
  • Files and directories
  • Programs and processes
  • Touch typing (QWERTY)
  • The basics of the internet protocol - enough to understand machine addresses and network delays
  • Pixels and images
  • Basic audio (D/A, A/D, WAV file format, oscilloscopes)
This is obviously very rough, and leaves out the wondrous complexities of HTML, online gaming, email, office software, and so on. And at first blush you might even think that I'm leaving out programming, or cramming that all into "Programs". But I'm not. I think programming comes in, to some degree, as a part of the treatment of these topics.
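
To gesture at how programming would thread through those topics, here is a tiny Python sketch touching several of them at once (number systems, ASCII, sequences, sorting, and searching):

    import bisect

    # Number systems: the same value written in binary, octal, and hexadecimal.
    n = 42
    print(bin(n), oct(n), hex(n))  # 0b101010 0o52 0x2a

    # ASCII: characters are just small numbers.
    print(ord('A'), chr(65))       # 65 A

    # Sequences: sorting, then binary search.
    scores = [31, 7, 19, 42, 7]
    scores.sort()                          # [7, 7, 19, 31, 42]
    print(bisect.bisect_left(scores, 19))  # index 2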

Also, this would not be all taught in one class, but would be the desired end point after years of elementary school and high school instruction. And a part of the emphasis should be on "learning by doing" because trying to get students to absorb all of this on a purely theoretical basis is just asking for problems.

More specifically, when we place an emphasis on programming as an isolated concept we lose all the motivation for why someone would want to learn such a thing. But, also, I think that some amount of programming should be brought in when teaching the above. And, perhaps, some actual formal programming instruction (syntax, for example) should be included.

And obviously there's room for improvement here, as well as room for making this all boring and superficial. 

Still, when I think about computer literacy, the above topics are - roughly speaking - the topics I assume a literate person would understand.

Of course, that leaves us with a huge problem: most of our teachers do not have the background for this. So, how do we get from here to there?

It seems to me that this will require that teachers adopt a different role. Instead of being subject matter experts, they should be instruction experts. It seems to me that the job of the teacher should become teaching study skills to the student, and that we need curriculum designed to walk the student through the specifics of what they will be learning. We should stop demanding that teachers be masters of everything and let them instead focus on what they do best: education as a specialty. (It is quite true that some subject matter expertise can help in relating the subject to the student, but it's also quite true that the instructors will pick up some of this as they go along. This will be especially true for prerequisite subjects which are covered frequently.)

This does leave us still with some problems, one of which is designing curriculum that satisfies these needs. But we have plenty of subject matter experts - we just need to put some effort into closing the loop (and some care in dealing with the problems which of course always arise when dealing with large numbers of people).