Saturday, February 22, 2014

Some programming books to read if you want to get into embedded systems.



The classic. The bible. It's the SICP of imperative languages. It's pretty much the only thing that all software engineers can collectively agree is an essential and awesome read. And believe me, getting programmers to agree on something is no simple feat. If you've ever wanted to see a real-life internet argument, get a couple programmers in a room, ask them what's the best way to compute the Fibonacci sequence, get some popcorn, and get ready for fireworks.

In a mere 270 pages, this book will dive into imperative programming, the Unix environment, and even a bit of computer science. It's a vital introduction to C, and also a great book for learning imperative programming in general. Despite what embedded C++ advocates tell you, or what the technologists over at Rust and D and Go all promise, C *is* the language of embedded software, and as such you must be well-versed in her intricacies.

"But muh assembly" you cry, like a rotund child begging for another donut, as his gut slowly protrudes its way beyond the vesicles of his striped polo. Look. There is a time and place for everything, including going down into the assembly level. We've all been there, and it's not pretty. Machine code is a cold, ugly beast that is best left in its slumbers, disturbed only by the ding of a compiler, execrating the object code that it may interpret henceforth.


When doing systems programming for major operating systems, knowing libc alone is usually enough. If you've ever written board support drivers, provided you've recovered from your PTSD, you know it's a whole different game.

It's like going to hell and back. Everything is beautiful inside of x86. It's almost adorable how effortlessly a printf flawlessly sends out a sequence of characters to your terminal; how freely they flutter their golden-trimmed %-formatted tips leeward to your screen. There are little pthreads playing "ring around the semaphore" in the fields of virtual memory, and hearty mallocs over by the red-black tree forests, harvesting the fruits of processes' labor into cozy, petite heaps. Unistd is always a grump, but nobody messes with him too much these days.

But the world of embedded control is a strange, hellish world. It's a world where there's no such thing as mallocs, and a printf is nothing more than a stream of data judiciously churning out of an RS-232 transceiver, like a tar pit churning out sludge, except the tar pit has a higher baud rate.

Thankfully, even in hell there can be a savior. Given how simple libc is, you might expect implementing the C library to be a simple, if annoying, chore. And in truth, provided you're working with an architecture that was Godless enough to give you absolutely no libc whatsoever, this isn't too bad. But there's a question of doing it right, and understanding the intricacies of the lib. Why assert is a macro. Why NULL isn't just 0, but 0 cast as a void pointer. Why fflush should be used sparingly at best. This book is here to answer those questions, and ultimately, help you build or extend your C library as need be. When was the last time you actually received decent documentation for your embedded C environment?
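To give a taste of the "why" questions involved, here's a minimal sketch of the NULL and assert story. The my_ names are made up for illustration (a real libc spells these out in <stddef.h> and <assert.h>), but the mechanics are the point:

```c
/* NULL is 0 cast to a void pointer, not a bare 0, so it is
 * unambiguously a pointer (it matters, for example, when it gets
 * passed through a variadic function).                             */
#define MY_NULL ((void *)0)

/* assert() has to be a macro, not a function: it must capture the
 * caller's __FILE__ and __LINE__, stringize the failing expression
 * with #, and compile away entirely when NDEBUG is defined.        */
#ifdef NDEBUG
#define my_assert(expr) ((void)0)
#else
#define my_assert(expr) \
    ((expr) ? (void)0 : my_assert_fail(#expr, __FILE__, __LINE__))
void my_assert_fail(const char *expr, const char *file, int line);
#endif
```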



The final book in the trilogy of C books is one that you'll keep coming back to long after you've learned the ropes. I give you... a fish.

Well, that's what I call this book. "The fish book." I'm not exactly sure what a fish has to do with being an expert C programmer, and I'm pretty sure the author doesn't either. Perhaps it's a grand metaphor for programming in C in general. You are a fish weaving through all of the potential errors that could arise that the compiler won't catch. Daintily programming the trickiest of the tricky buffer-to-struct castings, throwing up alignment macros like thugs throw up gang signs, firing off float-to-int bit hacks left and right without a second thought. Ah yes, the C programmer. He swims like a fish and... splashes like a fish, I suppose. Hashtag deep bro.
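If you want a concrete taste of the float-to-int bit hacks I'm talking about, here's a tiny sketch (not from the book, just an illustration): grabbing the raw bit pattern of a float, the well-defined way.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Copy a float's bits into an integer. memcpy is the strictly legal
 * type pun; the old (uint32_t *)&f cast you'll see in crusty embedded
 * code technically violates strict aliasing.                         */
static uint32_t float_bits(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}

int main(void)
{
    printf("1.0f = 0x%08" PRIX32 "\n", float_bits(1.0f)); /* 0x3F800000 */
    return 0;
}
```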

The book assumes you've already got quite a bit of C programming under your belt, and is here to answer all of those strange questions you might have had during your exploration of the language. It's full of little teasers and puzzles specifically designed to help you think more carefully about writing C programs and gain a deeper appreciation of the language in general. It's definitely a read I'd recommend for anybody, but an essential for C programmers. Also, I guess it's better than my undergrad CS 101 book, whose cover was a picture of the side of the author's payroll office.



This O'Reilly book is about how to actually design software for an embedded system in such a way that it doesn't completely suck. Think of it as software engineering techniques for the embedded environment. To me this read is pretty much mandatory, and I won't even let an intern touch trunk code until they've read it cover to cover. Obviously it's just an outline of design techniques, but most programmers are used to the beautiful lands of the PC, and being an embedded programmer requires a different way of thinking.



I have a confession to make. To be honest it's not much of a confession, but EEs still love to rub it in my face, like I'm a dog that mistakenly pissed on the floor. And that mistake was apparently studying Computer Science. I am not a hardware guy. My bachelor's was in Computer Science, not electrical or computer engineering. As such, when it comes to laying out boards or designing schematics, my first question is "why is that capacitor there?" and my second is "what is a capacitor?"

Okay, I know what a capacitor is. I've done my fair share of circuits, electricity, and magnetism, but I really only have a cursory knowledge of hardware. This book helped a lot in introducing me to the hardware side of embedded control without getting too crazy. It explains things in a way that's easy to understand from a conceptual point of view, with the obvious intent of letting you ask an EE a question without sounding completely retarded.



Today's embedded software is becoming increasingly complex. It used to be that our firmware would talk directly to the board, and that was it. But nowadays we are starting to see the familiar layers of abstraction we expect on PCs, including tiny operating systems. MicroC/OS is pretty much the de facto standard Real-Time Operating System (RTOS) that all others are in some way, shape, or form based on, so reading this book is essential.
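For flavor, here's roughly what "hello world" looks like in that world: a single blinky task under a uC/OS-II-style kernel. This is a from-memory sketch, not code from the book; the exact header name, stack-growth direction, and config macros depend on your port, and toggle_led() is a made-up BSP call.

```c
#include "ucos_ii.h"            /* uC/OS-II kernel header (name varies by project) */

extern void toggle_led(void);   /* hypothetical BSP call */

#define BLINK_TASK_PRIO      5u
#define BLINK_TASK_STK_SIZE  128u

static OS_STK BlinkTaskStk[BLINK_TASK_STK_SIZE];

static void BlinkTask(void *p_arg)
{
    (void)p_arg;
    for (;;) {
        toggle_led();
        OSTimeDlyHMSM(0u, 0u, 0u, 500u);   /* sleep roughly 500 ms */
    }
}

int main(void)
{
    OSInit();
    OSTaskCreate(BlinkTask,
                 (void *)0,
                 &BlinkTaskStk[BLINK_TASK_STK_SIZE - 1u],  /* top of stack: grows down on most ports */
                 BLINK_TASK_PRIO);
    OSStart();                             /* hands control to the kernel; never returns */
    return 0;
}
```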


This book gives you a lot of basic code for interfacing with common devices, such as analogue sensors and motors, UARTs, Ethernet, USB, CAN, etc. Highly recommended read, and a great reference for developing BSPs and drivers. Also, just look at it. It's a fucking circuit board flying through space. How awesome is that?
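To give a sense of the territory, here's the shape of the simplest driver of them all: a polled UART transmit. The register addresses and bit positions below are made up for illustration; the real ones come out of your part's reference manual.

```c
#include <stdint.h>

/* Hypothetical memory-mapped UART registers (illustration only). */
#define UART0_BASE   0x4000C000u
#define UART0_DR     (*(volatile uint32_t *)(UART0_BASE + 0x00u))  /* data register */
#define UART0_FR     (*(volatile uint32_t *)(UART0_BASE + 0x18u))  /* flag register */
#define UART_FR_TXFF (1u << 5)                                     /* TX FIFO full  */

/* Busy-wait until the TX FIFO has room, then push one byte out the wire. */
static void uart_putc(char c)
{
    while (UART0_FR & UART_FR_TXFF) {
        /* spin */
    }
    UART0_DR = (uint8_t)c;
}

/* The embedded world's printf, more or less. */
static void uart_puts(const char *s)
{
    while (*s != '\0') {
        uart_putc(*s++);
    }
}
```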



Sure, you say, I'm only putting it on here for fun. Well, kind of. This book is a compendium of bit hacks and whatnot, which can be, and often are, useful in the world of embedded systems.
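To give you an idea of what's inside, here's one of the classics (paraphrased from memory, so treat it as an illustration rather than a quote): rounding a 32-bit value up to the next power of two, handy for sizing ring buffers so the index wrap becomes a single AND.

```c
#include <stdint.h>

/* Smear the highest set bit into every bit below it, then add one.
 * next_pow2(100) == 128, next_pow2(128) == 128.
 * Caveats: returns 0 for an input of 0, and wraps for values above 2^31. */
static uint32_t next_pow2(uint32_t x)
{
    x--;
    x |= x >> 1;
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;
    return x + 1;
}
```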

Monday, September 9, 2013

Relational Algebra

A conversation between a friend and me:

Seth: You finished databases?

Me: No, I finished algorithms. I'm working on databases right now. I wonder if the instructor allows variable assignments. Seriously look at this shit. It gets ridiculous.

Seth: ...What symbol is that?

Me: Which symbol?

Seth: ...All of them.

Me: That one's capital Π. It represents the projection operation.

Seth: It looks like Egyptian hieroglyphics to me.
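For the record, the hieroglyphics read more simply than they look. Π_name(Students), to pick a made-up relation, is the projection of the Students relation onto its name column: keep only that column and drop duplicate rows, which is roughly SQL's SELECT DISTINCT name FROM Students.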

It's like Christmas up in Here

>Come home from work
>See this

A wild N64 has appeared! Thanks roommates!

This is going to help complete our media center. The plan is to turn a Raspberry Pi into a console emulator, but given the obvious need for authenticity (coupled with nostalgia), we now have a working N64 and NES!

Thursday, September 5, 2013

The Birth of Favicons

It's 1998. One late night a young engineer at Microsoft is working on the next build of Internet Explorer, version 5. He has this great idea in his head that he wants to implement. Why don't bookmarks (favorites) have an image next to them? Representing bookmarks as images would make it easier for the user to organize and find their bookmarks. So he dives into the old Win32 COM framework and sets up a scheme in which a server adds a "favicon.ico" file to its root directory, and any time a webpage is bookmarked, the browser sends a GET request for the asset in question and displays it next to the bookmark's name. Problem was, he wasn't supposed to be working on that. It wasn't part of the project outline. So he calls over a naive assistant project manager and asks if he can commit his build to the trunk. The assistant thinks it's a cool idea and OKs the commit, promptly getting yelled at the next day for insubordination.

With a team of over 100 engineers, it's a little too late to revert the commit, and the feature stays in. Fifteen years later the favicon is now the de facto "thumbnail" of a web page. Browsers now display favicons on each tab of a website. Shortcuts use it as an icon. It goes without saying that bookmarks make use of the icon. Billions upon billions of failed HTTP requests looking for a "favicon.ico" on amateur websites that never added one can be laid at the feet of one young engineer who had a neat idea and went through with it, despite the bureaucracy.

It's not exactly the invention of the C programming language, but it's the small innovations like these that count. Currently Microsoft is trying to play catch-up to Apple with Windows 8. Perhaps if they had allowed innovation instead of stifling it, Windows as a platform and an OS would be extremely different today.

Thursday, August 29, 2013

Explain Like I’m 5: Why Pi never ends

Oh boy.

You just had to ask didn't you? Do you even know what's coming? You might think this is a simple question, with a simple solution, but that is most certainly not the case. I hope your body is sufficiently prepared, because your mind is certainly about to get utterly blown.

So here goes nothing.

Quick review: What is Pi? Pi is the ratio of a circle's circumference to its diameter. That means if you were to perfectly measure a circle's circumference C and diameter D, and then divide C / D, you would get something around 3.14, or Pi. Well, at least in this universe. The answer you seek essentially has to do with the concept of countability.

Just what is countability? Well, basically, it means it's possible for you to count it. You can "count" apples, for example, but you can't count the political philosophy of democracy. There are things you can ask "how many do I have?" about, and things you can't.

Oddly enough, although Pi is a number, it's not countable, and since we know it's not countable, we know that its decimal expansion continues on to infinity.

What?

I know, I’m trying to explain this to you like you’re 5. Let's back up a bit.

Let's start with the numbers you learned when you were 5. These are positive, whole numbers. Whole in the sense that they don't get broken into fractions or decimals. These are called natural numbers, such as 1, 2, 3, ... up to infinity. These numbers are countable. That is, say I want to list every natural number on a piece of paper. Well, I would start by writing down 1, then write down 2, then 3, and so on until we've listed all the natural numbers! But wait, you say! There's an infinite amount of natural numbers, you couldn't write down all of them, and I don't think a single piece of paper would cut it! Well, you're right. It's impossible to write them all down since they continue on forever. But say you had an infinite amount of time, energy, and pieces of paper. Well, in that case, it's *possible* to write them all down.

Don't worry if that seems weird. Let's look at negative whole numbers now. When we add negative whole numbers to the natural numbers we form a new set of numbers called the Integers. The big difference here is the additional negative infinity. So we have 1, 2, 3, ... up to infinity, but also -1, -2, -3, ... down to negative infinity. Again though, if we had infinite time, energy, and paper, we could list out all of the integers. We might do it differently, like alternating between positives and negatives, such as 1, then -1, then 2, then -2, etc. But we would eventually be able to list all of them given infinite time.
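If it helps, that alternating listing is literally just a loop. Here it stops at 10, where the real list would keep going forever:

```c
#include <stdio.h>

/* Print the integers in the order 1, -1, 2, -2, 3, -3, ...
 * Given unlimited time this visits every positive and negative integer
 * exactly once (toss 0 in at the front if you want to be thorough).   */
int main(void)
{
    for (int n = 1; n <= 10; n++) {
        printf("%d, %d, ", n, -n);
    }
    printf("...\n");
    return 0;
}
```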

Continuing on, things are going to start to get a bit tricky now. Not all numbers are whole numbers. Some numbers are parts of whole numbers. Numbers like 1/3, 4.743, -2.5000, and 5 are what we call "rational" numbers. Rational numbers include the integers, so whole numbers are considered rational, but they also include decimals, fractions, and other partial numbers. So now comes the question: how do we list out every single rational number on paper? Or really, is it even possible? It seems like you can't. I mean, let's give it a try. I start with 1. Then I say 1.1. Then I say 1.01. But if I keep going in this pattern, there's always going to be a smaller number I've skipped. I could have 1.00000001. But 1.0000000001 is even less! It's impossible!

But you've got it all wrong, you see. You can still "count" them. The argument here is no different than the argument for listing out an infinite amount of natural numbers. This time however, the number can also be infinitely divided into parts.

It's better to think of rational numbers as fractions. Decimals will just make things more confusing, and you know you can always convert fractions to decimals and vice-versa. A fraction represents a division into parts. 2/3, for example, indicates a whole is split into 3 parts, and you have 2 of them. Let's say we decide to keep "halving" the number 1. First we get 1/2. Then we get 1/4. Then 1/8. And so on and so forth to infinity. Try to imagine actually "reaching" one of these numbers. For example, with 1/2, there are two possible "paths" to reach 1/2, and if we combined both of these "paths" we'd get 1, so there must be 2 paths. Same case with 1/4, except there are now 4 paths. 8 paths with 1/8, continuing on infinitely. The key thing here is that, no matter how finely we split things up, there is always a finite number of paths. That means we can count these paths. And by this logic, we can count the rational numbers. This doesn't just apply to 2's. We could do the same with 3's, like 1/3, 1/9, 1/27, etc., and get the same results. Does that make sense? I hope it does.
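My "paths" argument is a homemade version of the standard trick mathematicians use, which is worth seeing too: list the fractions in diagonals by the sum of numerator and denominator, and every p/q eventually gets its turn after finitely many steps. A tiny sketch (stopping early, where the real list goes on forever):

```c
#include <stdio.h>

/* Enumerate positive fractions diagonal by diagonal:
 * 1/1, then 1/2 2/1, then 1/3 2/2 3/1, then 1/4 2/3 3/2 4/1, ...
 * Every fraction p/q shows up after finitely many steps, which is
 * exactly what "countable" means.                                  */
int main(void)
{
    for (int sum = 2; sum <= 5; sum++) {
        for (int p = 1; p < sum; p++) {
            printf("%d/%d ", p, sum - p);
        }
    }
    printf("...\n");
    return 0;
}
```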

Now let's talk about Pi. Pi is what we call an irrational number. Irrational numbers are any numbers that are NOT rational. You might think you can represent Pi as a decimal, but can you represent it as a fraction? Go ahead, give it a try. The answer is you can't. Well, you *could* take only some of the decimal digits of Pi and turn those into a fraction. But that is not Pi. It is only a rational number that is close to Pi. And that's the catch here! Let's start with 3 and divide it up into finite paths like we did before. No matter how many infinitely tiny paths we make, no matter how small the distinction is, Pi will ALWAYS be between those paths, ever eluding our grasp.
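To make that concrete: 22/7 = 3.142857142857... and 355/113 = 3.14159292... are famous rational stand-ins for Pi, and they're close, but Pi itself starts 3.14159265358979..., so both of them miss. Every fraction misses; that's what irrational means.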

And that's that! If you're still confused, don't fret. This is not trivial stuff by any means, and I make no guarantees it would indeed make sense to a 5-year-old.

So to sum things up, Pi, along with all irrational numbers, is not countable. You cannot list all irrational numbers on a piece of paper, even if you had an infinite amount of time. And as shown above, since you cannot count, or enumerate, Pi, any attempt will always result in an ever-lengthening rational approximation, never Pi itself.

This post was inspired by a question on Reddit's "ELI5" subreddit. I may start giving detailed explanations to questions I find interesting, like I did here.

Monday, August 26, 2013

Okay what is this

And who am I, for that matter? Welcome to The Medium, a place where I'll be providing you thoughts and rants directly dissected from my head. And I mean that quite literally, actually. You see, in the 22nd century, society popularized the idea of extracting depressed, overstrung, withered engineers' thoughts from their volatile minds and providing them to the masses through the advanced social networking technology of the 22nd century, delivered via the brute, rod-to-brain electrodes of the 19th century, originally used in the analysis of neural networks in rodents. Unfortunately, I was not able to obtain the advanced social networking technology they had, because I am not capable of time travel. I did, however, obtain the primitive electrodes, as they exist in the 21st century. After some debauchery, and a frivolous pursuit to replicate the idea, I came to a startling conclusion: it turns out sticking electrodes in your brain is not at all pleasant for the human body, nor is it natural. So I abandoned the notion, but I felt the experience was too visceral to describe as anything but "literal."

Science, technology, engineering, mathematics. That's a lot of words. Let's call it STEM instead. STEM happens to be a "thing." It's not a great definition, but it apparently means something. And like all "things" in society, it is something humans seem to have difficulty coping with. There are just uncountably many issues associated with it: Just what is the best mobile computer to use when you're busy overthrowing your government in the Middle East? How many programs does it take to construct a human that is mentally and physically capable of screwing in a light bulb? And why are there so few transgendered kanurian cyborgs with a markov-chained CAPTCHA tranny mark to be found in the scientific labor force? Like a concerned lioness tending to her cubs, I am here to caress these answers into the sugary, delicious taffy of exquisite treatise that the masses demand.