On Your Shirt

We are now offering the following shirt designs for sale. You know you want one. All designs by Nikzen.

Yo Ho, Me Hearties, Yo Ho

In an excellent post over at Micro ISV Journal (here), Patrick McKenzie asked for some help. He said that he wanted us to try to gain some leverage on uISV software crackers. As I am about to enter the uISV business myself (I have a product in beta that will be available shortly), I felt I should not only help him, but spread his message as well.

The basic thrust of his message is that you can hijack the Google results for "Your Product Name (crack[sz]|serial[sz])." I love this idea. I want to get this website onto the first page of the Google results, pushing other, more nefarious links down a slot. So without further ado:

Bingo Card Creator crackz and Bingo Card Creator serialz can be found at the following link: here. While I haven't used this software, it seems it is very easy to find a crack or serial number or license key because they are so plentiful on the internet. I personally think cracking software is an excellent idea, particularly bingo software. And while it is very easy to write crackz (or cracks) for software, it seems that normal computer users are unable to do it themselves. I am glad that there are crackers and hackers out there who are willing to go around the normal protections for us, which incidentally raises the price of software for everyone, because software authors have to spend a great deal of time securing their software so that people don't steal it. In fact, I think we should crack all programs that make bingo cards. I think that we, as a nation of people, deserve the right to make bingo cards with other people's software regardless of whether the author put time, effort, money, and his own blood into the product.

In my desire to help uISV software reach its lofty goals, I post the above paragraph in an attempt to fill another slot on the first page of the Google results for "Bingo Card Creator Serials" and "Bingo Card Creator Cracks." This is one way a blog can make a difference, even if humans aren't the ones reading it. I encourage everyone whose software program has been cracked to write a similar article on their website to draw traffic.

YO HO!

Using Partial Functional Programming to Simplify and Improve Your Code

Everywhere I turn, I read about functional programming. Two major advantages are often espoused. The first is parallelism. Functional programming gives the opportunity for increased parallelization because purely functional languages operate without side effects (or at the very least, with quarantined side effects so that the processes produced are still parallelizable). The second advantage is that testing functional programs is easier because the result of each function only depends on its inputs. This is the dream of many a unit tester. It is this feature that motivated me to try another approach to some methods within business applications.

Business applications are often written in languages that do not support complete functional programming. They are often manipulating a great deal of state in order to achieve their goals. This can make testing very difficult and requires increased diligence in verifying the functionality of a program. I often run across code that is similar in feel to the following:


public class MyForm : System.Windows.Forms.Form{

// --- Omitted constructors and form variables for simplicity ---

  private DataTable myTable;
  private TextBox TextBox_Search;
  private DataGrid DataGrid_Results;
  private Button Button_LoadList;

  public void Form_Load(object sender, EventArgs args)
  {
    this.LoadList();
  }
  
  private void LoadList()
  {
    string searchParam = this.TextBox_Search.Text;
    if( searchParam.Length > 0 )
    {
      myTable = DataBase.GetFoo(searchParam);
    }
    else
    {
      myTable = DataBase.GetFoo();
    }
    this.TextBox_Search.Text = "";
    this.DataGrid_Results.DataSource = myTable;
  }
  
  private void Button_LoadList_Click(object sender, EventArgs args)
  {
    this.LoadList();
  }
}

You will notice, most prominently, that the LoadList method takes no arguments, but instead reads values from the state kept in the controls on the form. While this code will function, it is not the easiest to unit test, because testing it requires instantiating the entire form. Additionally, because the LoadList method reads from a variety of state variables, it must intelligently defend against errors in the representations of those state variables, namely when the text of the TextBox is empty. Never mind that if the TextBox or DataGrid objects are not instantiated, it throws a null reference exception. Generally, in a form setting, this is not really an issue, because the InitializeComponent method takes care of instantiating the controls before the Form_Load method is ever called. However, the code just smells. I think there is a better way.

I suggest that it is possible to decouple the LoadList method from the state variables inherent in the form. Further, this leads to more maintainable, testable, and reusable code. Another example:

public class MyForm : System.Windows.Forms.Form{

// --- Omitted constructors and form variables for simplicity ---

  private DataTable myTable;
  private TextBox TextBox_Search;
  private DataGrid DataGrid_Results;
  private Button Button_LoadList;

  public void Form_Load(object sender, EventArgs args)
  {
    myTable = this.LoadList(this.TextBox_Search.Text);
    this.DataGrid_Results.DataSource = myTable;
  }
  
  private DataTable LoadList(string searchParam)
  {
    DataTable table;
    if( searchParam.Length > 0 )
    {
      table = DataBase.GetFoo(searchParam);
    }
    else
    {
      table = DataBase.GetFoo();
    }
    return table;
  }
  
  private void Button_LoadList_Click(object sender, EventArgs args)
  {
    myTable = this.LoadList(this.TextBox_Search.Text);
    this.DataGrid_Results.DataSource = myTable;
  }
  
  private void DataGrid_Results_DataSourceChanged(object sender, EventArgs args)
  {
    this.TextBox_Search.Text = "";
  }
}

The above code accomplishes the same goal as the first version, but it does so by the use of a method that only depends on its inputs (I am ignoring database value differences here for the purposes of simplicity). In effect, this method decouples itself from the other methods, and potentially could even be in another class to be reused by other forms that have to present the same or similar information.

Taking this example a step further, I believe it to be possible to localize the use of state variables. In this example, this would mean moving as much code as possible from inside the form to another class that can more readily be reused. It also means that you can attempt to contain the manipulation of state variables to certain methods in your application. This increases the flexibility, utility and testability of the code. I imagine a clever implementation of an architecture that uses this technique would have very thin user interface code, and specified locations where state can be manipulated. This means that more and more of the code can easily be unit tested. Additionally, it means that there are not as many places to look when you discover a bug that is related to the state of your application.

We do not have to work in purely functional programming languages in order to get some of the benefits that they provide.
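To make the testability point concrete outside of C#, here is a minimal sketch in Ruby (the names and data are hypothetical, not from the example above): the filtering logic depends only on its arguments, so it can be exercised without constructing any UI objects at all.

```ruby
# Hypothetical sketch: the search logic as a pure function. Because
# the result depends only on the inputs, no form or controls are
# needed in order to unit test it.
def load_list(rows, search_param)
  if search_param.empty?
    rows                                          # no filter: return everything
  else
    rows.select { |r| r.include?(search_param) }  # filter by the search term
  end
end

rows = ["apple pie", "banana bread", "apple tart"]
puts load_list(rows, "apple").inspect  # => ["apple pie", "apple tart"]
puts load_list(rows, "").inspect       # => all three rows
```

The display and the clearing of the text box would still live in event handlers; only the decision logic is pulled out where a test can reach it.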

Domain Specific Languages Considered Jargon

Domain specific languages (DSLs) have been characterized as a "mini-language" inside a general purpose language. They are specific to a particular problem space, and they allow members of a team working in that DSL to communicate with each other more efficiently (not to mention allowing the team to program more efficiently). Jargon is the human-language (as opposed to programming-language) equivalent of a DSL. From Wikipedia:

"Jargon is terminology, much like slang, that relates to a specific activity, profession, or group. It develops as a kind of shorthand, to express ideas that are frequently discussed between members of a group..."

It stands to reason that the development of DSLs is not unexpected once you consider that many fields produce their own jargon. If we also accept that jargon exists only to save time when members of a group are communicating, then we can also postulate that DSLs are expressly created to save the programmer time when he is communicating both with his teammates and with the computer itself. The existence of this type of language construct in English and other human languages suggests that humans are designed to interpret this type of language and readily have the ability to do so. This might explain why Lisp programmers often feel "confined" when they are working in something other than Lisp. Their language allows them not only to construct verbs (methods) and nouns (data), but also to stipulate the grammatical constructions through which verbs and nouns interact. It allows them to define idiomatic constructs that make the problem at hand easier to solve.

Computer language programmers should learn from the existing human languages and make it easy to introduce new grammatical structures, while also defining new verbs and nouns. The existence of jargon is evidence to support that humans commonly do define new idioms, and that creating a mechanism for discussion within a particular realm is not only common, but effective. Often it is straightforward to learn the jargon of the domain experts. Being able to convert that jargon easily and directly into functional code seems like a step-function in productivity waiting to happen.

Scripting Languages moving to .NET platform

It seems that the "in" thing to do in the scripting world is to port your language to the .NET framework. Lisp, Python and Ruby are showing up on the .NET platform in a variety of incarnations. With Lisp it's dotLisp. For Python it is IronPython and Boo (which, while not strictly a Python port, shares almost all of Python's strengths, and its author admits a heavy Python influence). For Ruby, there are RubyCLR and Ruby In Steel. The questions are: who does this benefit? And why?

The why seems pretty clear. Programmers want to use their home languages at work. There is a lot of development going on in the .NET framework, particularly in C#. Programmers realize that you can do a lot more with a lot less effort in the scripting languages, but they don't want to have to relearn the libraries of another technology stack. This is where the .NET framework comes in. It provides a strong foundation upon which to build a port of your favorite scripting language. Since most of these languages are open source, it is possible to look at the original code and do something similar, or at least draw ideas from the original (normally C) implementation. Long story short: better languages drive programmers to port them to a framework they know and use for their day jobs.

Who does this benefit? Clearly, it benefits the programmers who love to work in their favorite scripting language. It benefits Microsoft, because programmers are not only working in the .NET framework, but also porting some of the most popular open source languages to it. Does it benefit the employer? It could. The employer might be in a position to let a team (in a large company) or his company switch to a more productive language, provided that there is a critical mass of programmers. No employer wants to start a project in a language that requires only very expensive and very rare individuals. However, the tide in this area might be turning. As more and more programmers express interest in open source languages, and as Microsoft continues to support other languages (they hired the creator of IronPython), more and more employers are going to see the potential savings of having more powerful languages to develop in.

This is a trend we are going to see continue to grow. As more languages are ported and become more widely used, more programmers will be knowledgeable in both the .NET framework and the target language. As more supply is created, the price of the programmer in that target language goes down. Hopefully, we'll get to work in the most powerful language we can handle soon enough.

Micro-ISV Mistakes

In my quest to establish my uISV I have done a lot of reading on the internet. Mostly the articles are theoretical; they don't deal in actual experiences or in real mistakes. Real mistakes are the best to learn from, because somebody already made them and that somebody can tell you what to look out for. As part of a thread on Business of Software (BoS) I mentioned this to Gavin Bowman (uISV owner of V4 Solutions). He took a suggestion I made to heart and compiled a list of articles discussing actual mistakes made by uISVs. Here is the easiest place to get some information on learning from other people's mistakes (and making new ones for yourself!).

Hippo Fondue: Do they serve that at Legend's Burgers?

Seriously, I couldn't make this up. My wife and I were in line at the local Legend's Burgers (a 50's joint you have probably seen in lots of movies; not quite the one in Pulp Fiction, but close enough) when she turned to me and said something that I thought was "Hippo Fondue." She claims never to have said anything remotely similar to that. [Note: my wife is Australian, so she does talk funny.] Later, I was reading through posts on Reddit and found a link to a place called WhaleSalad. I thought, "if that guy can have WhaleSalad, then I can surely have Hippo Fondue." And a site was born. The End.

Long Programming Short

I have often wanted to equip myself to do programming in short bursts. I often think I can accomplish this in the check out line at the grocery store, or while commuting (voice recorder for driving, or handheld device for bus/train/carpool). I often think, "if only I could use all this dead time to get something done." Here's the problem: that dead time is dead because you don't have something to program that is small enough to fit a unit of it into that time, and still interesting enough to be worth doing.

Sure, I can write "Hello, World" applications all day at the 7-11, but I can't really implement an object relational mapper, or a compiler while waiting for my meal at Del Taco. Interesting problems are by definition hard. Anything that could be done in the "off time" of some other activity has already been done in every programming language, most of which you can just download from the internet for free, because the authors know it only took them 20 minutes to do.

How can we combat this problem and still end up using those lost minutes productively? One thought is parallelism. If you can break your huge task into millions of little tiny (different) tasks then you could conceivably just do a single miniscule task while you wait for your quadruple espresso. You have to maintain a list of the tasks, unify the code somewhere, and make the entry reasonable enough that you can do it anywhere.

Another idea is to use an iterative approach. Just do one small change. Refactor a single line of code, change an error message, rename a function in your program, put a copyright header on yet another file, adjust your make file, write a single test within a larger unit test framework, just do something. Squeeze in commenting the "why" of a function or portion of a function while the $3.50 gas flows from the pump to your eternally thirsty gas tank.

Can it be done? I'm not sure, but it's worth a college try at least.

Web Programming and Interpreted Languages are the Same Thing

Client side web programming and interpreted languages are the same thing. Each runs on top of a platform that provides a layer between the operating system and the application you write. Each requires the user to download (or have already installed on their machine) some sort of "virtual machine" that interprets text directly or some form of intermediate language (IL in .NET, bytecode in Java, etc.). The only real difference (and it is a biggie) is that Internet Explorer or Firefox is effectively installed on every machine connected to the internet. Both of them sufficiently support a content language (HTML), a positioning language (CSS), and a scripting language (Javascript). Why aren't we writing application servers that deliver code (in a text form, like HTML is delivered) to the browser, which can then be interpreted by a plugin for the specific scripting language it's intended for (Perl, Python, Lisp, Ruby, etc.)? This would give you the centralization available with web applications, and the GUI reaction speed of an interpreted program running on your local machine. Am I the only one who thinks this would make the delivery of programs really easy while also making them user friendly?

Software Engineer Dissatisfaction

I read a lot of my fellow software engineers' blogs. I read technical posts. I read books. I read message boards. I read. Today, while I was reading a rant about the poor practices of most recruitment / technical staffing agencies, it dawned on me. Maybe this is obvious to everyone else, but hear me out. Software engineers are always dissatisfied with something (everything?), and I think I know why.

We spend our entire lives trying to quantify, qualify and design systems to be efficient, functional and easy to work on. We can't help ourselves - we even do it to non-computer systems. Whenever we look at a system we instantly see how it could be better, or at least how we think it could be better. Why hasn't someone seen this before? We don't even work in that industry and we can see how it would be better, just based on common sense. This frustrates us.

We rebel against the "that's the way it is" concept, because in our code we have ultimate control; things can be better if we are willing to put in the effort. Changes in code are much faster than changes in government, society or the recruitment process. Changes in our systems are effected at the speed of light while everyone else is waiting for the bus by the side of the road.

The uglier version of this sentiment is the root of why we talk down to other groups (salesmen anyone?). We see that their system is lacking, inefficient and driven by dogma. That's not even so much what is bothering us, it is more that people caught up in that kind of a system are unwilling to do anything about it. Why won't they fix it and make it easier for themselves, more efficient and more productive? I think we collectively look down at other groups because we think they are foolish for just grinding it out day after day and not trying to improve things.

We are a progress driven bunch. So, we reject anything that does not help us get somewhere new. Why do you think it is that this industry can produce an entirely new platform to work on every five years? We want something better. We keep trying. We keep working towards it, even if it's in small steps. Hence, anything that is not changing as rapidly is clearly not being worked on enough, perfected enough, improved enough. We are blessed to be dissatisfied with the status quo, but at the same time, we are constantly dissatisfied and grumbling about the lack of progress.

Character Density as a Measure of Programming Power

Having just written a post about how there are really only two sets of languages (work vs. home), I am going to compare the two (in my case C# and Ruby) using the metric I discussed in that article, namely that expressive power of an implementation (and by extension the language used for that implementation) can be measured by the minimization of characters required to define the exact same programmatic functionality.

I argue that if a program with the same functionality takes fewer characters to produce, then that program can be produced with greater ease and in less time. Less typing strain on a programmer's hands is always a good thing. For the purposes of this metric, I chose to remove white space from the count. I admit that this might skew the results towards certain languages; Python comes to mind because of its use of syntactic indentation. Other programming languages usually have delimiters that Python doesn't (C languages have {}, Ruby has begin..end, etc.).

It is a commonly held belief that less code results in fewer bugs. This argument is based on the assumption that the rate of bugs per line of code is consistent across large and small programs (for an individual programmer, or for a group of programmers). So, if you have fewer lines of code, you have fewer bugs. I think it is more accurate to say that fewer characters of code mean fewer bugs. Measuring by character takes into account the syntactic sugar that a programming language offers, or the special constructs it might employ to get the job done. By leaving white space out of the count, we can balance the need for human readability against our desire to minimize the number of characters.

Another advantage of this metric is that it can be used to simplify existing code in whatever language you are working in. Even if you are working in a generally higher character count language, you can still shrink your code by removing unnecessary characters. I would caution that this technique should not be taken to the extreme. Program readability and maintainability are far more important than character density.

Character counting is also very easy to implement in a variety of languages. You can grow your own in about 20 minutes, including testing. I did mine in Ruby; it's 11 lines of code including the object definition. Another 3 lines of code were required to instantiate the object and pass it a message to count the characters in my target files. The total character count for my character counter is 411, 174 of which are the actual class; the rest is the calls to my specific (and rather long) file locations.
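A counter along those lines might look like the following sketch (this is a reconstruction for illustration, not my original 11-line class):

```ruby
# Counts the non-whitespace characters in a string or file, per the
# metric described above (white space is excluded from the count).
class CharCounter
  def count_string(text)
    text.gsub(/\s+/, "").length  # strip all white space, then count
  end

  def count_file(path)
    count_string(File.read(path))
  end
end

counter = CharCounter.new
puts counter.count_string("def foo\n  1 + 2\nend")  # => 12
```

Point it at your source files with count_file and you have the metric in hand.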

Domain Specific Languages (DSLs) are something that Lispers (and sometimes Ruby users) point to as a major benefit of the flexibility of their language. Basically, a domain specific language is one that is purposefully geared to solving your specific domain problem. A good example is SQL. SQL is uniquely positioned to work with large groups of data. The advantage given by a DSL is that it has specific language constructs that help solve your problem (imagine trying to do SQL without UPDATE, for example). These core constructs are either given their own syntax, or indicators (symbols that represent the use of the construct), or both. Often, since these constructs are meant to be used over and over, they are represented by short atoms, or strings without very many characters. Part of the advantage of a DSL is that not only does it provide you with the constructs you need often, it does so with minimal typing. Reducing the characters makes it easier to use your specific important constructs.
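To illustrate the "short atoms" point, here is a toy Ruby sketch (entirely hypothetical, not from any real library) where the domain's core operations of filtering and projection get short, frequently typed names:

```ruby
# A toy query "DSL": the constructs used most often (filter, project)
# are bound to short names, so common operations cost few characters.
class Query
  def initialize(rows)
    @rows = rows
  end

  def where(&pred)   # short atom for filtering
    Query.new(@rows.select(&pred))
  end

  def pick(key)      # short atom for projection
    @rows.map { |r| r[key] }
  end
end

people = [{ name: "Ada", age: 36 }, { name: "Alan", age: 41 }]
puts Query.new(people).where { |p| p[:age] > 40 }.pick(:name).inspect  # => ["Alan"]
```

Compare the character count of that last line against the equivalent hand-rolled loop; the savings compound every time the construct is used.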

Paul Graham talks about the advantage of renaming the lambda form in his version of Lisp (Arc) to fn. That's four fewer characters. He also mentions several other forms which he renamed in Arc, and how they make code more readable and easier to produce. If you reduce the number of characters you have to type in order to do common tasks, you have a net gain in program readability and length (and from length reduction you get error reduction as well). These improvements range from the renaming of commonly used functions to the simplification of commonly used constructs (in Arc's case, the let construct).

Basically, fewer characters lead to less work and less code, which leads to fewer bugs. Who doesn't want a quick metric to measure that?

Ruby vs. Python vs. Lisp vs. Smalltalk vs ... X

I have been reading a lot of articles comparing languages on Reddit lately. They talk about Ruby, Python, Smalltalk, Lisp (and all its dialects, and all their comparisons), C#, C/C++, Java, et cetera. What strikes me most about these articles is that they are basically doing binary comparisons between two languages in an attempt to set up some kind of hierarchy. I don't think there is, or should be, a hierarchy for languages. I propose a new model, one consisting of two sets of languages. The sets have different entries for different people. The two groups are:

- The languages you use at work

- The languages you use at home

Most everyone can identify with both of these two groups. Obviously, different languages are going to show up in different groups depending on the person. Paul Graham can say that he uses Lisp (Arc?) for both home and work. I can say that I use C# for work and Ruby for home. In fact, my groups look like:

Home = (Ruby (learning), C#, Python (learning), Scheme, SQL)

Work = (C#, SQL)

From those lists it seems clear that my home languages are a superset of my work languages. It can be argued that programmers working for themselves are likely to choose the most powerful language they can wield, because they aren't afraid they are going to have to replace the programmer (one who knows the powerful language might be hard to find, or very expensive). Home programmers don't have to make this tradeoff; they are the programmer. [Technical note: I define the power of a language to be the minimization of characters required to perform a specific task. A future article will explore this metric using C# and Ruby to complete the same simple task.]

Languages fall into two groups, those that are safe for large companies to use for large projects (work group aka Java, C#) and those that are more powerful but also have more risk if improperly used (home group aka Python, Ruby, Lisp, Perl, etc.). Obviously, there are people using Java and C# at home, and there are (very lucky!) people using Ruby, Python and Lisp at work. This doesn't negate my assertion that most people don't code at home in the language they use at work.

Because of the existence of these two groups, I think it isn't worthwhile to argue endlessly about whether Python or Ruby is better at some specific syntax or task. I think it would be more worthwhile to use the advantage gained from more powerful and flexible languages (whichever your favorite might be) to produce software that is cleaner, shorter (in terms of code), easier to maintain, and more powerful. We shouldn't let the risk avoidance of businesses get in the way of our producing excellent software from more powerful constructs.

Take a chance. Fail. Then do it again. Don't spend your time arguing about what way is least likely to fail (because you are using the most productive language/framework/methodology). Nike got it right, Just do it.

Rock Soup, It Works!

At my current place of employment, we fail the Joel Test. We used to score exactly 0 (ZERO) on this test. At first, I thought this merely annoying, but as time passed I felt I had to do something about it. I was often distracted from what I should have been doing to deal with something that resulted from not having at least some of these processes in place. The tipping point was when I was trying to work too often from home, and I was having to copy the code and the database to my USB drive, and then install both on my home system before I could get any work done. Having freshly read The Pragmatic Programmer I thought I would give some of the techniques in that text a whirl.

My primary tactic was what they refer to as "Rock Soup." This is the practice of planting the seed of something better. You galvanize the actions of the group once they have a focus and can see the light at the end of the tunnel.

The first step was task and defect tracking. I, again, took Joel's advice and just started using a spreadsheet. At this point, it was just me and another programmer working in a single office, so he saw me using it to record the things that we had to do, when we needed them done by, and just little tidbits of information. He started to have me label things in there by who was going to do them. Then we needed more columns to describe the nature of our tasks. Then we needed a more powerful system. He had already started a system that would allow us to make notes in our development product - a precursor to letting our users make notations in that same system.

Asking for forgiveness is easier than asking for permission. If you know something is right, just do it that way. Never mind the boss who is breathing down your neck to get out a product six months before it could realistically be functional. Never mind that you have several days' worth of programming to finish this afternoon. Do it right. Later, you will be happy you did, when you spend less time testing, debugging and maintaining that very same code. Your boss will be happy too, because there will be fewer bugs reported against that code, and more customer satisfaction before anyone has to report a bug.

In this spirit, I made alterations to the task list system. I added various fields, split the tasks into sub-lists, and made the interface easier to use. This inspired my coworker to do the same. In the end, we had a fairly functional defect tracking tool that would allow us to print out reports for management to review. Could they understand what each of the bugs meant? Doubtful. Did they love getting the report and watching the count move up and down? Absolutely.

Things started to get hairy with the source code. In the past we had tried to use Visual SourceSafe. It wouldn't allow us to work from home, and as the project dragged on, it appeared we would be working more and more from home (weekends, evenings, mornings, etc.). We needed a better solution. We asked the people in charge of the project (5 of them, 2 programmers - see a problem?) if we could get a better version control system. They agreed once we made it clear that their goals were aligned with ours, namely making it easier for us to work on the code. So, I research the best alternative for us (in our case, it was SourceGear's Vault) and we implement it. A couple of weeks go by, and I start to get worried because our temporary license is about to expire. I talk to the heads of state; they are interested, but I get no action. I buy the software. I install the license. I send them an expense report. This is normal operating procedure at most companies, but you would think that the company would be a bit more interested in protecting their quarter-million dollar investment (in man hours alone) with a ~$600 version control system.

I'll stop here, because I am sure you all get the point. Just do something, especially if you know it is the right thing to do (like having version control software, or testing!). Get permission after it is done. Get your coworkers to buy in by doing it yourself and showing them how much better it is. Rock Soup really works.

Thoughts on programming languages

This article is reconstructed from some notes I made to myself on GMail (yeah, sadly, I write myself emails because I forget things). This is the sort of thing that starts happening to you when you spend six weeks not sleeping. You start to wonder if you will ever get to sleep again. Then, it dawns on you: if you could only get more done, then you would get to sleep more than 45 minutes every 48 hours. What happens next looks like the rants of a madman.

madman rant starts here:

Languages manipulate meaning.

Languages are a leaky abstraction for meaning. [note: see Joel Spolsky's article on leaky abstractions.]

Therefore, programming languages themselves for[ce] the user to think in terms of the language [note: Is this a form of the Sapir-Whorf effect?] because they are by definition a leaky abstraction that has to be worked around.

What is the easiest way to manipulate units of meaning?

How can we increase the density with which we manipulate meanings?

Mostly, we assign a meaning to a symbol, and then use that symbol to stand for that meaning. [circular logic when you are tired, anyone?] Examples:

14 + 35 = 49

The "+" means "add them together", and the "=" means "the result is". Most of the human race has agreed on this symbology [okay, "symbolism", but you should all see Boondock Saints].

This could be rewritten a million different ways:

+ 14 35

or

(+ 14 35)

or

(add 14 35)

or

(apply + '(14 35))
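All of those spellings denote the same operation; only the surface syntax changes. As a small illustration (my own sketch in Python, not from the original notes), the standard `operator` and `functools` modules let you write the same addition in infix, prefix, and apply-style forms:

```python
import operator
from functools import reduce

# Infix: the notation most of us were taught in school.
infix = 14 + 35

# Prefix: the operator as an ordinary named function.
prefix = operator.add(14, 35)

# "apply"-style: the operator handed to another function
# along with a list of its arguments, like (apply + '(14 35)).
applied = reduce(operator.add, [14, 35])

print(infix, prefix, applied)  # 49 49 49
```

The meaning never moves; only the coupling between symbol and meaning is spelled differently.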

Because of the ingrained nature of the add operation in our minds, we have no trouble coupling that symbol to that specific meaning. It is when we get into something like English that we have trouble:

Like: "Jane is blue."

Does that mean Jane is an alien, or does it mean she is depressed? This ambiguity doesn't work well for computers. They want specific instructions. Therefore we need a language that minimizes the ambiguity (eliminating it entirely would make it math, not language).

How can we minimize ambiguity? What tools can we give the developer to minimize ambiguity?

We need a structured way to write down algorithms and compositions of algorithms. In English and in programming languages we use context to carry some of the information and to reduce the possibility of ambiguity. [You like how punctuation and capitalization go out the window when you are tired enough?]

Normally, we could build a hierarchy of concepts which gets us from very simple starting concepts to very complex systems like TCP/IP.

These layers of abstraction further obscure the actual manipulation of atomic units of meaning, creating further leaky abstractions.

All of programming is based on this concept. Create an operator out of the operators that you have already been given and work from there, all the way up the chain to whatever abstraction you need to stop at in order to get your job done.

If we cannot get rid of ambiguity, and we realize that all programming languages are based on composition of basic operators, then it seems that a language would be more and more useful as it became easier and easier to compose the operators together, along with their data. Also, ease of manipulation of these compositions is a factor (think macros).
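To make "easier and easier to compose the operators" concrete, here is a minimal sketch in Python (the `compose` helper is my own illustration, not something from the notes): when composition itself is just another operator, building new operators out of old ones is a one-liner.

```python
from functools import reduce

def compose(*fns):
    """Build one function that applies fns right-to-left,
    the way mathematical composition is usually read."""
    def composed(x):
        return reduce(lambda acc, fn: fn(acc), reversed(fns), x)
    return composed

double = lambda n: n * 2
increment = lambda n: n + 1

# compose(double, increment) means double(increment(x)).
f = compose(double, increment)
print(f(10))  # (10 + 1) * 2 = 22
```

The ease-of-manipulation point stands out here too: because `f` is itself an ordinary value, it can be passed along, stored, or composed again, which is the data-and-operators fluidity the paragraph above is reaching for.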

The more meta-information we have about an operator, the less the operator has to actually show in code. Back to the "+": everybody knows what this means, so general meta-knowledge about "+" is very, very rich in individuals. We either have to educate individuals to the level at which they understand "+", or we need a formal language for describing the metadata of the operator.

How can we minimize code? By identifying patterns in meaning, and using a single operator to express that pattern.

End of madman rant

Alright, I have slept some since I wrote this. I have let the ideas roll around in my head. I am rereading it as I write this article, over and over. I admit these assertions are based on accepting a string of postulate-based arguments (as in, I can't prove any of the above statements is undeniably true). However, I do think it brings us to an important point. The point is hidden in "identifying patterns in meaning, and using a single operator to express that pattern."

Most modern programming languages (I cannot speak for all, but the ones I am familiar with) concentrate on the expression of algorithms. There is even an algorithms class in most undergraduate computer science programs. It seems algorithms are central to the creation and discussion of programming. Algorithms are made up of units of meaning. We just assume units of meaning because our human brains are used to assigning meaning to a symbol (written) or a sound (verbal).

To my knowledge nobody has given any thought to finding patterns in units of meaning. If, as my sleepless mind postulated, we can find regular patterns in the use of units of meaning, then we could assign a symbol to those more generalized patterns, allowing us to write more succinct programs using the new symbolic language. Think of it as writing a function that takes another function as an argument along with the arguments to be operated on. We find the general patterns of execution for types of meanings in human language, assign them a symbol, and then use those patterns with specific meanings to generate actual code. This would significantly increase the density of the units of meaning, because you are implicitly placing a single meaning into the context of a more generalized pattern. It requires fewer symbols to express more meaning. More succinct expression is more expressive power. Paul Graham agrees.
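That function-taking-a-function idea already exists in miniature in most languages. As a sketch (plain Python, my own illustration rather than anything from the notes), `map` and `reduce` each give a single symbol to a very common pattern of meaning and take the specific meaning as a parameter:

```python
from functools import reduce

# Pattern: "do something to every element." The pattern has a
# name (map); only the specific meaning (squaring) varies.
squares = list(map(lambda n: n * n, [1, 2, 3, 4]))

# Pattern: "combine all elements into one value." Again the
# pattern (reduce) is fixed; the combining operator is the
# parameter that carries the specific meaning.
total = reduce(lambda a, b: a + b, [1, 2, 3, 4])

print(squares, total)  # [1, 4, 9, 16] 10
```

Each call packs a loop's worth of meaning into one symbol plus one parameter, which is exactly the kind of density increase described above, just applied at a small scale.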

Is this possible? I have no idea. Am I going to try? Absolutely.

[Note: If I am completely daft and somebody has already gone down this road and come back again, please email me with a reference to their work. I doubt that I am the first person to consider programming languages from this angle, but I am ignorant of others' attempts.]

Back to the Home Page