2015-01-15

CPAN PRC 2015 - January (Plack::Session::State::URI) - Part 2

See the previous post in this series here.

Things are moving along slowly, but surely. I mentioned the GitHub issue that I created last time with a preview of a tidy branch. To my satisfaction, the author replied and said that it looks good. I could submit that as a pull request any time and technically have completed the challenge for January, but I'm not ready to yet. I would like to see if there are any other tidying tasks to do before I actually submit that branch for merging.

I also brought up the idea of using a proper HTML parser for the form field. Initially, the author sounded quite opposed to the idea for performance reasons. I tried to explain that an HTML parser is necessary if you want the module to always work properly no matter what HTML it is processing, and as a compromise I suggested toggling between the existing regular expression and a proper HTML parser with a constructor argument. That way users can choose what they want, and hopefully we can pick a good default for the users that just don't care. The module maintainer seemed to like that idea!
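
Just to sketch what I mean by that toggle (the option name and helper subs here are hypothetical; the real names would be up to the maintainer):

sub mk_hidden_input {
    my ($self, $html, $sid) = @_;

    # Hypothetical constructor option: keep the regular expression as
    # the fast default, but let users opt into a real HTML parser.
    if (($self->{parser} || 'regexp') eq 'html') {
        return $self->_inject_with_html_parser($html, $sid);
    }

    return $self->_inject_with_regexp($html, $sid);
}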

He also discussed the existing solution and came to realize that he already has an HTML parser! He is using it to add query string parameters to all anchor (<a>) tags in the document. It is called HTML::StickyQuery and it is a subclass of HTML::Parser. So bonus. We already have an HTML parser and the performance is fine (I assume). There is a problem though. HTML::StickyQuery doesn't appear to offer any hooks into it. I can't see an easy way to do what we need using the existing HTML::StickyQuery instance.

There is another module, which HTML::StickyQuery actually references in its documentation, called HTML::FillInForm that seems to do exactly what we need here: add a form field containing the session identifier to every form. It turns out that HTML::FillInForm is also based on HTML::Parser! Alas, again it doesn't look like there's an easy way to mash these modules together to accomplish both goals without additional overhead. I'm trying to figure out how best to solve this problem.
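
For reference, HTML::StickyQuery is used roughly like this (from memory of its documentation, so treat the parameter spellings as assumptions rather than gospel):

use HTML::StickyQuery;

# Rewrite every <a href="..."> in $html to carry the session parameter.
my $output = HTML::StickyQuery->new->sticky(
    scalar => \$html,
    param  => { $session_key => $session_id },
);

HTML::FillInForm's interface is similar in spirit, except that it fills values into form fields instead of rewriting links.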

A horrible hack of an idea was to attempt to dispatch between both modules' methods depending on which type of tag we are currently processing. My most recent comment on the GitHub issue describes a very ugly way to possibly accomplish it without duplicating any code. The downside is that it abuses the modules' internals, mashing them together into a new super object and then dispatching appropriately. This will only work if the modules' internals obey the assumptions that I am making and continue to do so (upstream changes to the internals could break this solution). It also assumes that each object has distinct hash keys so they won't collide and overwrite each other. Obviously I can't actually go ahead with this idea, but it's interesting to consider anyway.
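
To give a flavour of the hack (the package is made up, and the assumption that both parents implement the classic HTML::Parser start() callback is exactly the internals abuse I mean; this is a sketch, not working code):

package HTML::StickyFillInForm;    # hypothetical mashup

use parent -norequire, 'HTML::StickyQuery', 'HTML::FillInForm';

sub start {
    my ($self, $tagname, @rest) = @_;

    # Dispatch on tag type: anchors go to the sticky-query logic;
    # everything else goes to the fill-in-form logic. This only holds
    # up if the two parents' internal hash keys never collide.
    return $self->HTML::StickyQuery::start($tagname, @rest)
        if $tagname eq 'a';

    return $self->HTML::FillInForm::start($tagname, @rest);
}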

A more stable solution might be to manually merge the code from both modules into a new module that accomplishes both. That way upstream changes can't break us. The downside is that we would be duplicating code and we wouldn't automatically get bug fixes or feature enhancements from upstream.

I'm not overly familiar with HTML::Parser, but the way that these subclasses work doesn't impress me much. They are designed to own the entire document. They track their changes in an output string that they just concatenate to as they go. Firstly, this makes it impossible to combine these subclasses without parsing multiple times (or resorting to hacks that aren't guaranteed to be stable). Secondly, I am assuming that there is quite a performance penalty for all of the string concatenation. I'm not sure how perl would cope with that, but I suspect that it would be better to store a list or array of strings and join() them at the end. It would be nice if we could manipulate the document structure in object form so that many changes by many different, unrelated modules could occur. Certainly there is a cost to keeping that structure in memory. Perhaps the problem is that HTML::Parser is too low-level for our needs.
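
To illustrate the concatenation point (whether the join() approach actually wins would need benchmarking, of course):

# Growing one output string as the parser emits chunks...
my $output = '';
$output .= $_ for @chunks;

# ...versus collecting the pieces and joining them once at the end.
my @pieces;
push @pieces, $_ for @chunks;
my $output2 = join '', @pieces;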

I'll have to do some more research to identify if a better option exists. I'm not sure just how reliable HTML::FillInForm and HTML::StickyQuery are anyway. They don't strike me as being very robust. I'm sure they'll work for limited purposes, as I suppose is true for my assigned module too, but I don't think it would be hard to find cases in the wild where they don't work. For example, I saw that one of them tests the URI scheme against /https?/ and skips non-matches, assuming they are alternative protocols like ftp:// or mailto:. Instead they could just be local relative URLs with no explicit scheme (unless the implicit scheme is filled in by a third party module or something, but I didn't notice any such thing). Those kinds of links would be unaffected by the use of these modules. In general I can't recommend these modules for robust Web applications, but they might get you by if you have limited time and you're lucky that your Web site is compatible (or you develop it as such).
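
The scheme test I mean looks something like this (paraphrased from memory, not the module's exact code, and append_session_param() is a made-up helper):

for my $href (@hrefs) {
    # Only absolute http(s) URLs match; relative links like '/cart' or
    # 'page.html' are silently skipped and never get the session id.
    next unless $href =~ m{^https?://};
    append_session_param($href);
}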

I have to try to put together a few test cases that break the existing regular expression, and then I can go ahead and try to figure out how to integrate HTML parsing into the <input> injection. I currently plan to submit 3 pull requests: my tidy branch, a new branch with a test case to break the regular expression, and another branch based off of the test case that attempts to integrate a proper HTML parser (this way, if the maintainer doesn't like my solution with the HTML parser, he can still merge the test case and fix it himself). I currently have another branch, http-status, that just replaces magic numbers in the project with HTTP::Status constants. I'm not sure yet if the author will like that added dependency (though I don't imagine it's a big or unstable one) so I might keep it separate. Otherwise, I might end up rebasing it into the tidy branch. We'll see...

2015-01-11

CPAN Pull Request Challenge (PRC) 2015 - Introduction / January (Plack::Session::State::URI)

I have signed myself up for the CPAN Pull Request Challenge (PRC) 2015 as organized by Neil Bowers. The idea is that every month you are assigned a CPAN module at random that is hosted on GitHub and you have to submit at least one pull request for it. I heard about it a few days ago when Olaf Alders announced it to the Toronto Perl Mongers list (again?). It sounded like a neat idea so I decided to sign up.

My January assignment is Plack::Session::State::URI. I began by creating an issue on the GitHub repository requesting feedback and guidance from the author. Unfortunately, I haven't received a response yet. The module is fortunately very tiny. Unfortunately, it is based upon Plack, which I have limited experience with (at least I have a little...). There are a few things that could obviously use some work.

  • There is very little documentation. In fact, there's so little documentation that I just have to infer what it does myself. Fortunately, I have a background in Web programming so it was no real mystery. Unfortunately, that still makes it difficult for me to test it. And worse, it makes it difficult for me to imagine how changes I make will affect users of the module.
  • There are very few tests. Granted, the module only has about 60 SLOC so how much testing is really needed? That said, I believe that several more tests could be added. Many of them will probably require a richer understanding of the Plack::Middleware framework than I possess. It would be lovely to acquire that extra needed experience, and perhaps this is a good opportunity to do it, but I'll need to try to balance it with the other aspects of life.
  • Parsing HTML with a regular expression. Of course, regular expressions cannot properly parse HTML, SGML, or XML. You need a proper parser to do that correctly. Several such modules exist on CPAN. Unfortunately I only have experience using XML parsers in Perl, and HTML is far too forgiving for an XML parser to get along with it. I'll need to learn to use an HTML parser with the power to edit the document contents, preferably without altering the formatting of the existing document. This is probably the most useful thing that I could do for this project at the present time, but I'll need to identify the module to use and learn to use it properly first. If I do pursue this problem then I can also add test cases that break the regular expression parsing (see the sketch after this list) and use that to verify that my fix works.
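
As a taste of what I have in mind for those test cases, here is perfectly legal HTML that a naive <form> regular expression could stumble over (my own contrived example, not taken from the module's tests):

my $tricky = <<'HTML';
<FORM
    method="post"
    action="/login">
    <!-- a <form> tag inside a comment can fool a regex too -->
    <input type="text" name="user">
</FORM>
HTML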

Since I haven't received a response yet from the module maintainer I have begun doing my best to come up with work to do. We all have personal preferences when it comes to code formatting, structure, and style. I've tried to avoid rewriting too much code, but at the same time I've tried to address things that I believe could be improved. I expect the author will not like some of the changes that I've made so I'm trying to make it clear that I can rebase as necessary to keep the changes that he likes while getting rid of the changes that he doesn't. I'm also trying to keep my commits nice and small so they're easy to understand and hopefully are small enough that controversial changes are isolated and easy to leave out.

I currently have a work-in-progress wip/tidy branch. This will probably eventually become my first pull request for this module. That branch is not considered safe yet. I am currently force pushing to it as I edit history and try to get things just right. I'm pushing mostly as a backup mechanism. In order to publish something that the module maintainer can review without worrying about the branch getting destroyed out from under him, I have published a static snapshot to preview/tidy/1 and submitted an issue to document it. The idea is that until the wip/tidy branch is ready to be merged I will continue to rebase it and publish snapshots to incrementing integer revisions as in preview/tidy/2, preview/tidy/3, ... preview/tidy/n. When it's finally done I'll probably delete wip/tidy and publish a tidy branch instead for the pull request. I have lots of experience with Git, but I have no real experience with pull requests. I mostly use Git by myself so I haven't had a need for pull requests and generally don't have to worry about people tripping over history edits either. One of the things I can hope to learn from this experience is how to do nice pull requests and also how to manage history in the public eye without sacrificing history cleanliness or causing undue pain for others.
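
Concretely, the dance looks something like this (assuming the remote is named origin):

$ git push -f origin wip/tidy           # history edits, hence the force push
$ git branch preview/tidy/1 wip/tidy    # freeze a snapshot of the current state
$ git push origin preview/tidy/1        # never rewritten, so safe to review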

Hopefully I hear back from the maintainer soon so I have an idea which direction to go in. I quite enjoy writing (just look around you!) so I'd be happy to add rich documentation to this module too. I just don't know exactly how robust it is and don't want to go making documentation up for it. It would be nice to get feedback directly from the author or others who have used it so that I know better what the strengths and weaknesses are and could document them appropriately. I wouldn't want to lure people into wasting time with this module if it can't do what they need, and similarly I wouldn't want to chase them away if it will do exactly what they need. Perhaps I should work on setting up a basic Web site with it and seeing just how well it works. That is quite a lot of work though. There are advantages to it, but these days I am typically not a ball of energy at the end of a work day. We'll just have to see what I can accomplish in the coming weeks.

2013-07-16

New Domain Name!

I've decided to change the domain name of this blog. The original domain was bamccaig.com. I began using that name in college, in part because I couldn't think of anything clever that I was satisfied with, and in part because it was formal and avoided any kind of negative backlash that some screen names may attract. It is built from my real name: Brandon A McCaig. The thing is that nobody could ever pronounce it, and on that same note it was difficult for others to remember how to spell it. Most didn't realize that it was based on my real name, and would try to pronounce it as a single word. Inevitably, it was shortened to "bam" and variations thereof. For example, "bammers", "bambam", and "bambams". When your initials are B.A.M. you naturally explore it as a name. I obviously considered "bambam" in the past, but didn't want to be compared to the Flintstones character and disregarded it.

Before long I realized that it had grown on me. I began choosing it as my username whenever possible, but "bambam" and "bambams" are relatively popular so it can be a struggle to secure them for myself. I remember having to wait several months before "bambams" became available on the Freenode network. Of course, when I saw my opportunity I seized it! Somebody else is using "bambam" still. Somebody else is also using "bam". The latter would be a good name in the same style as those hackers that came long before, such as dmr or rms.

Since it is now a nickname that I take pride in I decided to register bambams.ca and move my blog to it for now. Eventually I may opt to develop a more personal Web site with a custom blog being in a prominent corner, but for now this will suffice. I still hold castopulence.org and castopulence.com for more professional ventures (though they mostly serve as placeholders while I remain employed full-time).

I have other nicknames in gaming communities. Among Counter-Strike crowds I am better known as "freefall" or variations of "beer", "free", and "music". My Steam display name is usually ..FreeFall.. --beer --music. In all of my years playing I have only noticed one player recognizing the beer and music tokens as command line options. Originally they meant that I was drinking beer and listening to music while playing. They were so consistently true that they became part of my identity.

For now bamccaig.com is pointing back at my Castopulence Web site. I have a virtual host set up that should redirect it to bambams.ca, but it isn't working so obviously I have the virtual server configured wrong... OK, I got the <meta> redirection working as intended, which will suffice. It turns out that I just had really sloppy Nginx config (arguably I still do, but I'm a n00b with Nginx, and I don't really have time to spend on it so it is what it is).

2013-03-12

Singing/Rapping - Acapella - 50 Cent's "My Life" - Eminem's Verse and Hook

Just messing around, challenging myself by trying to perform Eminem's verse to My Life. And for the fun of it, trying to sing the hook (which is sung by Adam Levine).

Let me know what you think. :)

2012-04-29

New Counter-Strike Spray!

I can haz "hugz"?

This is my new Counter-Strike spray (with a lot of help from "allefant"). Players are able to spray their own images on the map surfaces while in-game. It's common to see jokes, as well as more "explicit" images. Some players choose to personalize them as well. Many players seem to only spray their spray after defeating somebody, sort of as a way to rub it in. For me, spraying my spray is part of my routine. My beginning-of-round routine basically consists of spraying my spray at my feet, followed by saying "Affirmative!" / "Roger that!" or "Sector clear!" on the team "radio", and rapidly switching back and forth between my gun and knife (sort of a warm-up routine).

Since most won't get the caption, I'll try to explain it. I predominantly play the gun-game mod, and knifing is my preference. The object of the game is to "level-up" by getting kills, but you can steal other people's levels by knifing them. So a "hug" from me in-game means I was holding a knife when I did it, and you lost a level. You are usually left with either humiliation or frustration (or a smile). xD

The name "FreeFall" was actually inspired by an Evanescence song, "Weight of the World". They are my favorite band and I was listening to The Open Door while trying to think of a handle. Unsurprisingly, most people think of "Free Fallin'" by Tom Petty when they see my name, and many attempt to sing the chorus in-game as a joke (sometimes to mock me). xD

Amazingly, the first person to ever ask me about my name actually asked if it was from a song. I said yes, but probably not the song he was thinking of. To my amazement, he actually named Evanescence! I was amazed that somebody else would randomly get the reference. I don't think it has happened since. ^^

The "--beer" and "--music" bits were originally meant to resemble command line options, and were flags to indicate that I was drinking (so my general abilities might be compromised) or listening to music (so at least my hearing is compromised), respectively. Most people call me "Beer" instead of "FreeFall" though (some have even called me "Free Beer") so I often don't bother to remove the "flags" when they don't apply. They've basically become part of my name.

I'd really like to post some YouTube videos of me playing, but I've been pretty unsuccessful in actually turning demos into video. I'm also a n00b when it comes to video authoring so I'm not sure how to edit out the boring parts. Maybe eventually... Another challenge is actually recording demos that are worth posting. ^^ I've had a few, but often I don't think to record until /after/ I've done something noteworthy. I digress...

I've been playing Counter-Strike: Source for about 5 years now, I think. Until now I don't think I've changed my spray since I first created it. My original spray was a drawing of an angel girl that I ripped from a tattoo artist's Web site. If I had the artistic ability I would have removed or crippled one of the wings in reference to Sephiroth from FFVII, but alas I do not. I often use FFVII: Advent Children graphics for avatars as well (I think my Steam one still is).

Maybe someday I'll preach about the honorable way to play Counter-Strike. :) For now, you'll just have to enjoy my spray.

2012-04-23

DVCS Branching - Why Mercurial Sucks and What Mercurial Advocates Are Missing

Introduction

Version control software is basically used to track changes to files (usually plain-text files) over time. It is primarily used by software developers to track source code changes. It used to be that a centralized server was used to host a centralized repository of changes, and clients would connect to it to checkout or check-in changes.

Then came a revolution known as distributed version control. This model basically eliminates the centralized repository and gives every user (developer) their own complete clone of the repository.

There are basically two popular distributed version control systems (DVCS) available right now, and both are free/libre/open source software. They are Git and Mercurial. Both were developed to replace BitKeeper for use in maintaining the Linux kernel, but only one was written by Linus himself so the other one sucks. ;) There are other free ones and also commercial ones, but I don't know of any good reason to use them. Linus Torvalds did at one point say that if for whatever reason you needed a commercial DVCS that BitKeeper was the one to use.

Git and Mercurial work relatively similarly. As far as I know, they both represent history internally using a directed acyclic graph (DAG). Combining this with the distributed design means that branching and merging happens to work quite well in both (it has to). Effectively, whenever you work in a separate clone of the repository you create a new physical branch, even if you don't mean to. As a result, you often have to merge when you synchronize your local repository. That only tells part of the story of branching though.

Branching In Mercurial

The Mercurial community has basically come up with 3 branching strategies that you can use with Mercurial. The reason there are so many of them is that none of them really works very well. Proponents look for ways to justify these misfeatures, but when you actually put the strategies into practice you see that it's nonsense.

Named Branches

The core branching feature is called named branches. This is what the `hg branch' and `hg branches' commands manage. The implementation is a little bit finicky. It's effectively a name attribute stored with each changeset. That's all there really is to it. I imagine it must have been inspired by Subversion properties or something silly like that. The way it works is that you are basically free to set the working branch name anytime you want with `hg branch <name>'. The attribute is remembered and when you commit it is stored in the changeset as a permanent part of history. The head of a named branch is basically the most recent commit with that branch name.

Branches in Mercurial are shared by default. If you want to only synchronize selective branches then you need to explicitly specify only the ones you want synchronized with every relevant command. I find it extremely tedious. It basically makes it difficult for anyone to work under the radar without distracting others. In a big project you can expect that many people are not going to be interested in the work that others are doing. You're only really interested in the parts of the project that impact you in some way and don't want to be bothered with all of the other noise. Mercurial makes it difficult to do that.

Another problem is that in this distributed environment there is only one namespace for branches, and branches are permanent parts of history. This means that bad things can happen if two completely distinct developers happen to choose the same branch name for different branches. It also means that if you just want to create an experimental branch and throw it away later then you are forced to share it with the world anyway (short of history editing, which the Mercurial community also discourages). A workaround that was added later is `hg commit --close-branch', which basically adds a new changeset with a new special attribute to indicate that the named branch is "closed" (i.e., finished with). This basically just tells Mercurial to hide it from view by default. The branch is automatically reopened though if you accidentally commit onto it again. In my experience, it's very difficult to keep things straight when the entire universe's branches are always available to the `hg update' command, which is used to update your working copy to some specific version (e.g., the latest) as well as change branches.
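
For the curious, the mechanics look like this:

$ hg branch experiment                              # set the working branch name
$ hg commit -m 'Try something'                      # the name is now permanent history
$ hg branches                                       # list branches with open heads
$ hg commit --close-branch -m 'Abandon experiment'  # hide it from view by default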

Repository Clone

What the Mercurial community would have you do for experimental changes that you may wish to throw away is clone the repository again and work in a throw-away repository instead. Due to the nature of the distributed DAG history you are actually automatically branching whenever you work on a cloned repository. This solution does work, but it's very Subversion-like in that you have to manually manage "branches" within your file system. Mercurial will use hard links on supported file systems to save space if you clone locally (basically each clone will share whatever physical files on disk that it can), but that's little comfort. In my experience, this strategy doesn't work overly well, and it clutters your project namespace (e.g., I generally keep all source code projects in ~/src or "%USERPROFILE%\src").

You also can't directly share these experimental changes with other people or with a shared remote repository without pushing a new anonymous head and screwing everybody up. At best you could tell your collaborator to explicitly clone anew, and keep the history separate from the main history, just as you have done, but that's a very error-prone approach.

Bookmarks

A lot of Git users complain about the branching options in Mercurial, and the Mercurial community's solution was to offer an alternative branching mechanism that is similar to how branches work in Git. The solution is called "bookmarks". Bookmarks began as a Mercurial extension, but have since been merged into the core.

A bookmark is basically an association between a name and a commit identifier. These associations are stored in .hg/bookmarks. When you commit on a bookmark the bookmark is automatically updated to point to the new head. Since bookmarks are stored outside of history they can be easily created, renamed, and deleted at any time. They are very lightweight compared to the other branching options. Indeed, they are similar to a Git branch. The problem with bookmarks is that they aren't shared by default. If you choose to use bookmarks to manage branches in your project then you either have to tediously communicate each bookmark to all collaborators or risk them getting quite confused because there will be multiple seemingly anonymous branch heads when they pull your branches in.

Basically, you have to explicitly push bookmarks with `hg push -B <name>', and explicitly pull them with `hg pull -B <name>'. You can check for new bookmarks in the remote repository with `hg incoming -B'. Unfortunately, this puts a lot of responsibility on the collaborators to manually keep bookmarks up to date. The documentation says that once both sides of the connection have a bookmark it will be updated automatically, but that's not much help. Without them, at least, Mercurial is quite happy to jump branches on you without really raising any alarms (I'm not sure it would stop you even with the bookmarks, to be honest, though I've been told it will). The current implementation is just not sufficient for real world usage. Most of my collaborators are even in the same room. I can't imagine if you had people spread out across a building or the world.

Branching In Git

Git is quite smart about most of what it can do. A branch in Git is a much more logical concept. You get to think of Git branches as tangible things. There is basically only one strategy for branching and it works quite well. Perhaps most importantly it's distribution friendly.

Git manages your branches for you. A branch is actually represented by a very light file that basically just references the head commit identifier of the branch. This allows you to create, rename, and delete branches at will, very efficiently, similar to Mercurial bookmarks. Git takes care of efficiently storing the actual history for you so you can be sure that it doesn't duplicate anything needlessly. The operation of switching branches (`git checkout') is also distinct from the operation of updating your local branches (`git pull'[1]).

Git branches are not all implicitly synchronized with a remote repository. If you `git push' then only the current branch's commits will be pushed. Similarly, when you `git pull' then only the current branch will be pulled. Branches are namespaced by repository. Local ones are just in the global namespace, but remote branches are namespaced by the remote name. For example, origin/master refers to the 'master' branch of the remote 'origin'. This puts the user in complete control of their repository. The remote might refer to a branch as 'foo', but you can use a local branch named 'bar' if you want to. Git doesn't implicitly synchronize any branches with remote branches. You have to explicitly associate them in configuration before they will implicitly be synced. Otherwise, you can explicitly choose which local and remote branches you want to synchronize.
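
For example (the remote and branch names are illustrative):

$ git checkout -b bar origin/foo    # local 'bar' tracking the remote's 'foo'
$ git push origin bar:foo           # explicitly map the local name to the remote name
$ git branch -d bar                 # delete the local branch; the remote is unaffected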

Conclusion

Mercurial proponents typically try to defend Mercurial's branching madness with excuses that branches should be permanent history and that it's a feature, not a bug. The reality is that Mercurial has a very non-distributed nature to its branching policies. Instead of letting each user be in control of his view of the world, the Mercurial community seems to be in favor of encouraging everybody to always share the entire world with the entire world.

In short, Git is a much better DVCS for branching (and in my experience, pretty much everything else too, but I digress). Mercurial is a good alternative for people that need extra hand holding and don't plan to make extensive use of the tools. Branching in Mercurial is so painful that we basically don't do it where I work, just as we didn't really do it with Subversion. The little bit of branching that we do do ends up being ugly and distracting and it's very error-prone in my experience.

References

[1] - `git pull' is equivalent to `git fetch && git merge'. Note that if you're wise to it then you can use git rebase instead of git merge if you want to linearize [local] history.

2011-09-01

"-//CS//Lyrics 1.0//EN" AKA Lyrics DTD/XML/XSL/XHTML/CSS

I know I hinted towards a technical blog post last time. I expected my next post to be about Java or Perl Web programming. Well, that one is coming, I suppose. I am making ground and eventually would like to write about it. I don't think I'm quite satisfied with my progress yet though (nor am I so dissatisfied that I need to plead for help yet). This post is somewhat technical, but it's also somewhat personal.

I love to sing. I sing all the time. In the shower, in my car, at the computer, etc. Everywhere, basically. I love music and singing is one of the things that I like most about it. Or perhaps, more accurately, the lyrics are. I love lyrics. For me to like a song the lyrics had better be clever. You get bonus points if they're emotional or humorous and I can relate to them. I guess then it shouldn't be surprising that I also love to rap.

Indeed, it wouldn't be an exceptional day for somebody to notice me driving by them in Sault Ste. Marie with my stereo cranked, singing or rapping along. Whenever I hear a new song that I like I'll look up the lyrics online (in my younger days I would do it from the CD cover, but a wide screen monitor is a lot bigger and easier to read from in the dark). I'll then practice reciting them while listening to the song(s) on repeat. This could mean hearing a new or leaked song on YouTube and just starting the same video over and over again (shout out to YouTube Repeat!!). More often, though, it means that I just bought a new CD from one of my many preferred artists and am listening to it on repeat.

The Amarok media player is great for this because it supports a lyrics module that allows you to view the lyrics for the current song right there in the media player automatically! It's great for people like me that enjoy analyzing and learning the lyrics of a song, or even just singing them after 6 or 8 months have gone by and they aren't fresh in your mind anymore.

Often I find that the various lyrics sites online don't have 100% correct lyrics. Some people do a really bad job of transcribing them. I don't know if it's laziness or incompetence, but I personally find it really annoying when I'm trying to sing or rap along to a new song and I'm stumbling over the wrong lyrics on a page.

It seems that most user maintained lyrics sites online make it difficult to contribute though. You generally have to register with some random site, which means a whole new set of credentials to remember, and a whole new server with your personal information just waiting to be hacked and compromised. They also don't do a very good job of describing the lyrics or presenting them to the user. There's usually a mix of slang and unslang and it's never really clear whether the transcriber is sure of a particular word or just making an [un]educated guess. You may be surprised just how often a word or two is a mystery in very popular songs.

This, combined with my thirst for XML knowledge, led me to develop a simple XML document type for lyrics. It's a work in progress so nothing is feature-frozen yet. Currently I have a document type definition (DTD) to validate against, an XSL stylesheet to transform the lyric data into standards compliant XHTML 1.0 Strict, and a CSS stylesheet to accompany the XHTML output to prettify it a bit (no guarantees that the CSS is standards compliant, unfortunately, though the W3C's validator seems to think it is valid CSS 2.1!).
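
The toolchain is nothing exotic; with libxml2/libxslt installed, the whole pipeline is something like this (the file names are just mine):

$ xmllint --valid --noout song.xml         # validate the lyrics against the DTD
$ xsltproc lyrics.xsl song.xml > song.html # transform the lyrics into XHTML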

Currently I only have two songs transcribed. The first is We As Americans, by Eminem, from the Encore bonus CD. That was basically my initial test drive, if you will, for the format and stylesheets.

Two evenings ago I got to thinking about one of my favorite bands, Eisley, and wondering if they had released any new albums since I last checked. It turns out that indeed they had! The Valley was released back in March. I made it a plan to stop by the music store yesterday to look for it. Now Eisley is somehow still somewhat of a below-the-radar band, which is puzzling because they're so amazingly talented and creative and beautiful. In any case, a side effect of this is that music stores often don't stock them (some of them don't even have a listing for them!). I could rant all day about music stores and the music industry and how stupid things have gotten (see my previous post about the movie industry for a hint of what to expect), but I'll try to avoid doing that now and save it for another post.

Long story short, I went out to buy one CD and came home with seven...and none of them was the one that I went out for! "But, Brandon," I hear you wondering. "What about Eisley?"

Oh, I ordered it. Don't you worry. As expected, the music store didn't have it in stock. Unfortunately, it cost me $20 to order through the store. The night before I had seen that the online retailer linked from the band's Web site was only asking $10. Without knowing the shipping fees though it's hard to say if I'm any worse off. Not that I'd mind buying an Eisley CD for $20. In fact, I think I probably paid $20 for each of Room Noises and Combinations too. What bothers me is that Eisley will only see a tiny percentage of that, if any, and the crappy music stores and record companies that I'm avoiding ranting about will take most of the profit.

Moving on, the CDs that I actually came home with were as follows in no particular order: Theory of a Deadman (Theory of a Deadman), The Truth Is... (Theory of a Deadman), Taylor Swift (Taylor Swift), Greatest Hits (Guns N' Roses), 22 More Hits (George Strait), American Saturday Night (Brad Paisley), and This Is Country Music (Brad Paisley). I was also trying to buy or order The Spirit Room (Michelle Branch), but not only did the music store (the only one left in my city to my knowledge) not have any Michelle Branch CDs in stock, but her CDs weren't even available from their distributor peoples! Ugh.

So anyway, having bought all of these CDs, I then had to rip them into my computer so I could sync them with my iPod (I could also rant about how much Apple sucks, but there's a different time and post for that too). In doing so, I found myself listening to old and new music and singing along, and by the time 2 AM rolled around I wasn't really tired. In fact, I couldn't seem to stop singing or rapping. I tried going to bed, but eventually just started to rap the lyrics to one of the songs I had written back in high school, and then decided that I should transcribe it and publish the lyrics on my site. That way I'd have a digital copy (it was quite a challenge to remember the words after so many years), since I had destroyed, lost, or abandoned all other copies. I eventually gave in and decided to do this immediately while it was fresh in my mind (both the urge and most of the lyrics).

So the second song I have transcribed is actually my own! It's called Wanna Be God. I separated it into a separate file to initially keep it off the radar and also to keep it separate from "real" songs. :) There are a few other songs that I wrote back in high school. One or two of them I might feel are worth publishing online also. We'll see if I ever get around to that.

In the meantime, please check out my XML document type, and accompanying DTD, XSLT, and CSS. And, of course, check out the lyrics to both songs:

Eventually I'd like to implement a server-side backend to manage them (once I get either a Java or Perl Web environment set up to my satisfaction, of course ;), at which point I could potentially open the system up to OpenID users and let the world help me to fix lyrics. I'm not yet certain where I stand with regards to copyright infringement though. Technically I am violating Eminem's (or his record company's) copyright by publishing the lyrics to his song without permission. I do have a disclaimer on the page to highlight the fact that I don't own the copyright... It seems harmless to me, but nevertheless, there is always the possibility that somebody will feel they are losing money as a result and try to convince a judge to make me reimburse them... Hopefully that doesn't happen... Honestly, I can't even figure out how I would contact Eminem or Interscope to ask for permission. Their Web sites are terrible. :P

If you are or represent a copyright owner and wish for me to have a work removed from my site then please kindly address an E-mail to me (bamccaig at castopulence dot org) detailing who you are and what relationship you have to the copyrighted work, and the copyright owner's wishes, and I will promptly add any disclaimers you want or, if necessary, lock that work up behind a secured connection for personal use only.

2011-06-09

UNglobal State Programming Environment

A year or two ago I stumbled across a very useful Google Tech Talk on YouTube about global state and Singletons and why they're bad (the video was embedded here).

It makes a lot of sense, and after letting it sink in and experimenting with dependency injection myself I can say that it results in much better designs, whether or not you're writing tests (the video is Java-centric and test-centric, but neither is necessary for it to apply). Thinking about it, though, popular programming environments do impose a lot of global state that is, in my experience, regularly accessed without a second thought. When you collaborate with other people it's hard to make them understand how global state and Singletons are bad when to them it's the easy and fast way to get their job done. It's my opinion that a large part of the problem is that programmers are just used to global state from their programming environments. What we need is for languages to be updated and newer languages to be done right in the first place: no more global state in the programming environment. Pass that state into the program's entry point. For example:

// C
int main(const system_t * const system, const int argc, const char * argv[])
{
    fprintf(system->stderr, "This was a triumph.\n");

    return 0;
}

// C++
int main(const System & system, const int argc, const char * const argv[])
{
    system.cerr << "This was a triumph." << std::endl;

    return 0;
}

// C#
private static int Main(System system, string[] args)
{
    system.Error.WriteLine("This was a triumph.");

    return 0;
}

/* Java (albeit, while they're at it, main should return an int in Java too) */
public static void main(System system, String [] args)
{
    system.err.println("This was a triumph.");

    system.exit(0);
}

Etc. Wouldn't that be a wonderful platform to develop on? I certainly think so.

2011-05-23

Kernel-space Politics

I thought of an interesting analogy the other day during an IRC conversation that I thought I'd share with you. :) It was probably almost a week ago (Blogger was down for maintenance at the time) and I forget exactly what the discussion was about, but more or less just how the members of government exploit the people for their own personal gain. In defense of privatizing as much as possible, I argued that the government is a little bit like the kernel of an operating system. If you want the system to be secure and reliable then you need to move as much as possible into userland (i.e., outside of the kernel). This way the individual processes (services, businesses, etc.) can attempt to do wrong, but ultimately the authority (kernel AKA government) has the power to stop them. On the other hand, if these processes are occurring directly within the kernel then they have unrestricted access to the entire system and can do whatever they want, even if they happen to corrupt or crash the system in doing so.

Then again, with the world's major currencies under private banking monopoly control you have to ask yourself whether the government is actually the kernel or only the system drivers. :-X The government can only hope to keep the system secure and reliable if it is actually ring 0 and free of corruption itself.

2011-05-17

.NET P/Invoke With Managed Callbacks

Update: See append at the bottom for an update.
Update2: Open sourced the bindings. See append2 at the bottom for an update.

I've finally thought of another topic to write about. Or, rather, the topic fell on my desk. Again. It is about using managed methods as callbacks in native code. I first learned about this months (a year?) ago when I was working on a ZBar.Processor proxy class for zbar-sharp (.NET bindings for the zbar bar code scanning library). This was my first real experience using P/Invoke.

The zbar_processor is effectively a high-level object to scan frames from a Web cam for bar codes and return the encoded data back to the calling application. The data is returned via a callback function. This seemed simple enough once I figured out how P/Invoke basically works. I just wrote up a compatible method and passed it into the zbar API. While it did seem to work, the native library would eventually throw memory access violations.

It turns out that the problem was me (SHOCK). Or rather, my code. Or rather, missing code. See, in .NET the equivalent of a function pointer is basically a delegate instance. This is effectively a method object (I imagine it to be similar to a functor in C++). In any case, calling the delegate instance is equivalent to calling the method. Even the this reference is preserved.

#digress Delegates In C#

A delegate type is declared like so:
namespace Application
{
    class Example
    {
        public delegate void foo(int bar);
    }
}
What this does is basically declare a delegate type named "foo" with no return value and a single int parameter. You can then create an instance of the delegate like so:
foo f = new foo(MyMethod);
Where MyMethod is a static or non-static method that matches the signature. The instantiation can also be implicit:

foo f = MyMethod;

For a more complete example:
using System;

namespace Application
{
    class Program
    {
        static void Main()
        {
            new Program().Run();
        }

        public void Run()
        {
            // Using a method, explicit instantiation.
            Example.foo f1 = new Example.foo(this.Foo);

            // Using a method, implicit instantiation.
            Example.foo f2 = this.Foo;

            // Using a lambda, explicit instantiation.
            Example.foo f3 = new Example.foo(
                    (int bar) => Console.WriteLine(bar));

            // Using a lambda, implicit instantiation.
            Example.foo f4 = (int bar) => Console.WriteLine(bar);

            f1(1);
            f2(2);
            f3(3);
            f4(4);
        }

        void Foo(int bar)
        {
            Console.WriteLine(bar);
        }
    }

    class Example
    {
        public delegate void foo(int bar);
    }
}

#enddigress

As I was saying before I so rudely interrupted me, I was missing a tiny little detail. In order to use one of these delegates as a callback in native code you'd first have to instantiate one. OK, first we actually need a native method to call, and a callback type.
using System;
using System.Runtime.InteropServices;

namespace Application
{
    class Example
    {
        public class NativeMethods
        {
            public delegate void zbar_image_data_handler_t(
                    IntPtr image,
                    IntPtr userdata);

            [DllImport("libzbar0")]
            public static extern IntPtr zbar_processor_set_data_handler(
                    IntPtr processor,
                    zbar_image_data_handler_t handler,
                    IntPtr userdata);
        }
    }
}
Now in order to use it we simply do as above:
using System;

namespace Application
{
    class Program
    {
        static void Main()
        {
            new Program().Run();
        }

        public void Run()
        {
            IntPtr processor = IntPtr.Zero;

            /*
             * Use your imagination here. We would have had to call native
             * methods to create and initialize a zbar_processor struct.
             */
            
            var handler =
                    new Example.NativeMethods.zbar_image_data_handler_t(
                    this.ZBarProcessorImageDataHandler);

            Example.NativeMethods.zbar_processor_set_data_handler(
                    processor,
                    handler,
                    IntPtr.Zero);
        }

        void ZBarProcessorImageDataHandler(IntPtr image, IntPtr userdata)
        {
            Console.WriteLine("Bar code successfully scanned!");
        }
    }
}
Pretty straightforward, really, but there is a problem! Can you see it?! Not likely, because it isn't even in the code. Remember that .NET is managed and garbage collected, and delegates are basically objects. What this means is that when the garbage collector finds a delegate that is no longer referenced by anything it will eventually garbage collect it. This was something that certainly hadn't occurred to me when I was writing this code. It wasn't until I sought help online that somebody (it was many months ago, but I'm guessing in #csharp on freenode) pointed out that the delegate object could be destroyed by the time the zbar library attempted to call it.

So what is the solution? We need to keep a reference to it around so it isn't destroyed. We can do that by simply creating a field to hold it.
...
    class Program
    {
        Example.NativeMethods.zbar_image_data_handler_t
                imageDataHandler_;

        ...

        public void Run()
        {
            IntPtr processor = IntPtr.Zero;

            /*
             * Use your imagination here. We would have had to call native
             * methods to create and initialize a zbar_processor struct.
             */
            
            this.imageDataHandler_ =
                    new Example.NativeMethods.zbar_image_data_handler_t(
                    this.ZBarProcessorImageDataHandler);

            Example.NativeMethods.zbar_processor_set_data_handler(
                    processor,
                    this.imageDataHandler_,
                    IntPtr.Zero);
        }

        ...
    }
...
At this point, after fixing a few other subtle bugs, the program ran fine without access violations! Unfortunately, the zbar_processor (at least, at the time) wasn't very reliable and we ultimately ended up implementing the Web cam interface ourselves and passing the scanned frames to zbar.

In any case, I've recently been writing .NET bindings for wkhtmltox, another native library. This one is for converting HTML/CSS Web pages to PDF documents. It works really well because it's based on the WebKit framework (hence, the "wk" in the name). So I forgot all about the lifetime of the callback delegates and ran into the same problem. Today I finally remembered and looked back at the zbar bindings to see what was wrong.

I've now fixed that by adding delegate fields to the appropriate proxy class, but unfortunately I'm STILL getting access violation exceptions. It's probably going to be some other subtle bug as a result of .NET speaking to native code. I haven't figured it out yet though.

If you have any ideas about what else would cause memory access violations in P/Invoke programs or have any questions about what I've shown then please leave a comment. :)

Append:

Seems there is more to the story than I thought. See the following thread for details:

http://www.gamedev.net/topic/270670-pinvoke-callback-access-violation/

The gist of it is that there is a System.Runtime.InteropServices.UnmanagedFunctionPointerAttribute attribute that can be used to mark up your callback delegates with P/Invoke attributes (for example, calling convention). I'm not sure why, but specifying CallingConvention.Cdecl for my callback delegates seems to postpone or avoid the access violations. I don't really understand why because the functions of the library are being called with the default calling convention, which I think is CallingConvention.StdCall. Attempting to specify CallingConvention.Cdecl for the native functions immediately causes the stack to become imbalanced/corrupted (according to the .NET runtime; I think I trust it in this case ;). It doesn't really make sense to me that the callbacks would use a different calling convention so I suspect that the different calling convention for the callbacks just coincidentally changes the way the memory is accessed to avoid violations... I don't know. I'm lost. :(
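
For reference, the markup in question looks like this when applied to the delegate from the earlier examples:

using System;
using System.Runtime.InteropServices;

[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
public delegate void zbar_image_data_handler_t(
        IntPtr image,
        IntPtr userdata);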

Append2:

I've turned the .NET bindings for wkhtmltox (wkhtmltopdf only) into an open source class library. :) You can find the code at its GitHub repository. It doesn't work properly yet (in fact, I've given up on it for now and am working on invoking the wkhtmltopdf.exe process instead), but I see no technical reason why it can't so I (or you) just need to figure out what's currently wrong. :)

2011-04-15

Fedora Package Contributor for Allegro

It's been a long time since I've posted anything on here and I guess it's long overdue. I've either lacked a good topic to type about or was just too drained to do so. Well no longer!

I figured I would take a moment to announce that I've become a package contributor for the Fedora project. What does that mean? It just means that I'm currently contributing to the development or maintenance of one or more of the RPM packages for the project. In particular, I'm maintaining the Allegro 5 packages. What this means is that I took the Allegro 5 source code and turned it into RPM packages. The main repository that I'm using to track Allegro 5 package development is available on GitHub: http://github.com/bamccaig/allegro5-rpm. Don't confuse me though with the actual Allegro developers who deserve the real praise for the Allegro library!

There are plans for me to help out with or take over the Allegro 4 packages as well (which I think will include games and other related projects). I've just recently gotten Allegro 5 adopted into Fedora 13 and 14, and 15 (AKA Alpha) is pending. AFAIK, Fedora is the first distribution to get Allegro 5 packages! So yay for that! I'll now be working on updating the Allegro 4 packages (allegro) to the latest stable version, 4.4.x.y.

I guess that makes me an "official" contributor to Fedora. So that's pretty exciting. I think so, at least. What are you waiting for? Go put my packages in action! You can learn how to here if you don't already know.

Since there isn't much else to say about that, I'll take a few moments to discuss a few of the things I learned about working with RPMs. RPMs are based on a .spec file that basically just describes the package in a text file: name, description, version, release, where to get the source code, the software license, build instructions, which files to install, what their permissions should be, etc. To turn a .spec file into an SRPM[1] or RPM you use the rpmbuild tool. Once installed you can easily query the package system for information about a package using rpm or yum. For example:
$ # Lets query the basic information about the core allegro5 package.
$ yum info allegro5
*snip*
Name        : allegro5
Arch        : i686
Version     : 5.0.0
Release     : 3.fc13
Size        : 994 k
Repo        : installed
From repo   : updates
Summary     : A game programming library
URL         : http://liballeg.org/
License     : zlib
Description : Allegro is a cross-platform library intended for use in computer games
            : and other types of multimedia programming. Allegro 5 is the latest major
            : revision of the library, designed to take advantage of modern hardware
            : (e.g. hardware acceleration using 3D cards) and operating systems.
            : Although it is not backwards compatible with earlier versions, it still
            : occupies the same niche and retains a familiar style.

$ # Lets query the changelog of the core allegro5 package.
$ rpm -q --changelog allegro5
* Wed Mar 09 2011 Brandon McCaig  5.0.0-3
- Adding file permissions to subpackages.
- Moving devel files (namely .so symlinks) to devel packages.
- Added %doc section proper; readmes, changes, license, etc.
- Fixed SF.net URI.
- Modified BuildRequires.
- Added main devel dependency to subpackage devels.
- Replaced many al_*.3* manpage files with a glob.
- Replaced many header files with directory and %exclude macros.
- Added allegro5.cfg file under /etc/allegro5rc.

* Fri Mar 04 2011 Brandon McCaig  5.0.0-2
- Merged primitives addon packages into core packages.
- Merged memfile addon packages into core packages.
- Merged "main" addon packages into core packages.
- Merged font packages into core packages.
- Merged color packages into core packages.
- Merged doc package into the devel package.
- Fixed spelling mistakes.
- Removed explicit library dependencies.

* Fri Feb 25 2011 Brandon McCaig  5.0.0-1
- Initial version.

$ 
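The build step itself is just as mechanical (assuming the .spec file is named allegro5.spec, and run from a prepared rpmbuild tree with the sources in place):
$ rpmbuild -bs allegro5.spec    # build just the SRPM
$ rpmbuild -bb allegro5.spec    # build the binary RPM(s)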
I was actually quite happy to learn just how easy it is to create RPMs that work. RPM seems to be very smart and figures out a lot of the dependencies itself. Contributing simple packages really isn't very difficult if you already know a thing or two about how to build them from source (programming experience helps too). So if you're a programmer that uses Fedora (or any other Linux distribution) and wants to help out by becoming a packager then I suggest you look into it. It's not as difficult as you might think.

Thanks to my sponsor[2] for guiding me through the process and helping me out when I got stuck.

References

[1] An SRPM is a source RPM. It basically is just an RPM that contains the .spec file, source tarball, and any patches that might be bundled. The SRPM is an intermediary stage between the source code and .spec and the fully-fledged RPM.
[2] I'll refrain from naming him without his permission to do so, just in case, but you can probably find out who it is easily enough by checking out the fedora bugzilla system if you care. :) He knows who he is. :)

2010-07-06

SQshelL - Building and Installing sqsh In Linux (Fedora 13) Without The Sybase CT-Library

If you're like me then you generally prefer fast, efficient command-line interfaces to expensive (CPU- and RAM-wise) and slow GUI interfaces. Recently, my Windows 7 (Seven) box got really slow, and in particular Visual Studio and SQL Server began to crawl. I couldn't really explain it because the Windows Task Manager said that the System Idle Process was using all of the CPU and the system was only using about half of my 3 GB of RAM, but nevertheless, scrolling a single line of code took seconds! Something had to be done.

I had finally had enough and insisted to j0rb that I was going to install Linux. Now, I currently still require Windows because most of our projects are .NET-based (and bleeding edge, .NET 4.0 stuff), and it seems Mono just isn't quite there yet. I was very pleased to hear that they do support many (or most?) .NET 4.0 features, but apparently not LINQ to SQL, which we make extensive use of. To solve this contradiction, I would install Windows in a virtual machine. I ended up installing Fedora 13 (x86) as the host, with Windows 7 (Seven) Ultimate in a VirtualBox guest. I was quite nervous about whether or not it would actually perform adequately (especially considering how poorly Visual Studio and SQL Server were performing on the raw metal), but so far so good.

On to the topic at hand: building and installing sqsh in Fedora 13. sqsh is basically a shell interface for Sybase ASE (I guess?) or SQL Server. It allows you to do powerful things that you expect from your system shell, like redirecting streams, using environment variables, etc., as well as interact with a database in SQL. I'm not overly familiar yet with what it can or can't do (I just heard about it and went ahead and installed it), but my hopes are high. Hopefully it will at least reduce the need for me to wait for SQL Server to start up and close, or (ugh) connect.

Naturally, when I first heard of it I instinctively asked YUM for it. Sadly, I was disappointed to learn that it is not in the default repositories. "No matter! I will just install from source," I told myself.

The first step to installing from source is obviously to get the source. So head on over to SourceForge, where it's currently being hosted, and download the latest version (I'm using 2.1.7 at the time of writing).

http://sourceforge.net/projects/sqsh/

Assuming your results are similar to mine, you should now have a sqsh-<version>.tar.gz file somewhere on your file system. Firefox downloaded it straight to my Downloads folder (dl for short). I keep all source trees in ~/src for organization, so let's go ahead and extract it to there now (you can extract it wherever you like).
[bamccaig@krypton ~]$ tar -xzf dl/sqsh-2.1.7.tar.gz -C src
[bamccaig@krypton ~]$ cd src/sqsh-2.1.7
[bamccaig@krypton sqsh-2.1.7]$ 
Great. So far so good. Now let's take a look for "readme" files that we should read before proceeding (they're typically named in all uppercase):
[bamccaig@krypton sqsh-2.1.7]$ ls -1 | egrep '^[[:upper:]]+$'
AUTHORS
COPYING
INSTALL
README
[bamccaig@krypton sqsh-2.1.7]$ 
Excellent! There is a README and most importantly an INSTALL! Go ahead and read through both of those files now. I'll wait.

... ... ...

OK, now that we've read those two files, we should all be aware of the dependencies. Basically you need the Sybase CT-Library OR you need FreeTDS. This threw me for a loop at first because the Sybase CT-Library dependency was worded in such a way that it sounded absolutely required. If you go on to read the next dependency, you'll see that you can alternatively use FreeTDS. It sounds like there are actually a couple of freely available Sybase downloads for Linux (free as in beer, not as in speech), but I'm not entirely sure which, if any, contain the Sybase CT-Library and, worse, you're required to register with Sybase to download them (ugh). I was pleased to discover that doing so is unnecessary.

Instead, I say again, you can use FreeTDS; and so that's what we're going to do. Let's see if we can find it.
[bamccaig@krypton sqsh-2.1.7]$ yum search freetds | grep '^freetds'
freetds-devel.i686 : Header files and development libraries for freetds
freetds-doc.i686 : Development documentation for freetds
freetds.i686 : Implementation of the TDS (Tabular DataStream) protocol
[bamccaig@krypton sqsh-2.1.7]$ 
That is in the default repositories so let's go ahead and install it (including the development files; and WTH, the documentation too).
[bamccaig@krypton sqsh-2.1.7]$ su -c 'yum install freetds freetds-devel freetds-doc'
Password:
[Hopefully YUM spits out happy here]
[bamccaig@krypton sqsh-2.1.7]$ 
With that out of the way, we're almost ready to build. Taking another look at the INSTALL file, we need to specify the base SYBASE directory in the SYBASE environment variable. I know what you're thinking. I know I was. I don't have Sybase! This doesn't apply to me! Not so fast. It turns out that it does apply to you (and me).[1] We instead need to point the SYBASE environment variable at the base directory for FreeTDS.

What this means is that $SYBASE/include and $SYBASE/lib should tell our build process where to find the necessary header and library files. So where are those files installed for FreeTDS? You can make an educated guess or you can ask rpm itself. I actually didn't know about this until now. It's hopefully going to make things a lot easier in the future.
[bamccaig@krypton sqsh-2.1.7]$ rpm -ql freetds freetds-devel | \
        egrep '/(include|lib)' | sort
/usr/include/bkpublic.h
/usr/include/cspublic.h
/usr/include/cstypes.h
/usr/include/ctpublic.h
/usr/include/sqldb.h
/usr/include/sqlfront.h
/usr/include/sybdb.h
/usr/include/syberror.h
/usr/include/sybfront.h
/usr/include/tdsconvert.h
/usr/include/tds.h
/usr/include/tds_sysdep_public_32.h
/usr/include/tds_sysdep_public.h
/usr/include/tdsver.h
/usr/lib/libct.so
/usr/lib/libct.so.4
/usr/lib/libct.so.4.0.0
/usr/lib/libsybdb.so
/usr/lib/libsybdb.so.5
/usr/lib/libsybdb.so.5.0.0
/usr/lib/libtds-0.82.so
/usr/lib/libtdsodbc.so
/usr/lib/libtdsodbc.so.0
/usr/lib/libtdsodbc.so.0.0.0
/usr/lib/libtds.so
[bamccaig@krypton sqsh-2.1.7]$ 
You can check the man or info page(s) for details, but essentially what we did is query the package manager for the list of files that were installed; then we filtered out the results we weren't interested in (since we only care about files within an include or lib directory, I grepped for that within the path). For good measure, I also sorted the results.

As you can see, FreeTDS is installed in /usr. No real surprise there. OK, so we need to set SYBASE to /usr.
[bamccaig@krypton sqsh-2.1.7]$ export SYBASE=/usr
[bamccaig@krypton sqsh-2.1.7]$ echo $SYBASE
/usr
[bamccaig@krypton sqsh-2.1.7]$ 
And now we're finally ready to start the build process. sqsh uses the "GNU auto-configuration package", as so many open source projects seem to do, so it builds with the rather standard ./configure, make, make install sequence. If you want readline support (and most people probably do) then you'll want to add the --with-readline option when configuring (you'll obviously need readline, and its development headers, installed for that to work). Review the INSTALL file for more options.
[bamccaig@krypton sqsh-2.1.7]$ ./configure --with-readline
[Lots of output... Should be happy if your environment is complete...]
[bamccaig@krypton sqsh-2.1.7]$ make
[Everything is going well, the project compiles, but then all of a sudden... the linker is angry!]
gcc    -L/usr/lib  cmd_alias.o cmd_bcp.o cmd_buf.o cmd_connect.o cmd_do.o cmd_echo.o cmd_exit.o cmd_for.o cmd_func.o cmd_go.o cmd_help.o cmd_history.o cmd_if.o cmd_input.o cmd_jobs.o cmd_kill.o cmd_lock.o cmd_loop.o cmd_misc.o cmd_read.o cmd_reconnect.o cmd_redraw.o cmd_reset.o cmd_return.o cmd_rpc.o cmd_set.o cmd_shell.o cmd_show.o cmd_sleep.o cmd_wait.o cmd_warranty.o cmd_while.o var_ctlib.o var_date.o var_debug.o var_dsp.o var_hist.o var_misc.o var_passwd.o var_readline.o var_thresh.o dsp.o dsp_bcp.o dsp_csv.o dsp_conv.o dsp_desc.o dsp_horiz.o dsp_html.o dsp_meta.o dsp_none.o dsp_out.o dsp_pretty.o dsp_vert.o dsp_x.o sqsh_alias.o sqsh_args.o sqsh_avl.o sqsh_buf.o sqsh_cmd.o sqsh_compat.o sqsh_ctx.o sqsh_debug.o sqsh_env.o sqsh_error.o sqsh_expand.o sqsh_fd.o sqsh_filter.o sqsh_fork.o sqsh_func.o sqsh_getopt.o sqsh_global.o sqsh_history.o sqsh_init.o sqsh_job.o sqsh_readline.o sqsh_sig.o sqsh_sigcld.o sqsh_stdin.o sqsh_strchr.o sqsh_tok.o sqsh_varbuf.o sqsh_main.o -ldl -lm   -lreadline -lcurses  -o sqsh
cmd_bcp.o: In function `bcp_signal':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:1164: undefined reference to `ct_cancel'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:1166: undefined reference to `ct_cancel'
cmd_bcp.o: In function `cmd_bcp':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:337: undefined reference to `ct_cmd_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:877: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:894: undefined reference to `blk_done'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:898: undefined reference to `ct_cancel'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:901: undefined reference to `ct_close'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:904: undefined reference to `ct_con_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:910: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:924: undefined reference to `ct_cancel'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:942: undefined reference to `ct_cmd_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:947: undefined reference to `blk_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:952: undefined reference to `ct_close'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:953: undefined reference to `ct_con_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:957: undefined reference to `cs_loc_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:395: undefined reference to `ct_command'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:406: undefined reference to `ct_send'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:865: undefined reference to `blk_done'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:866: undefined reference to `ct_cancel'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:867: undefined reference to `ct_cancel'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:422: undefined reference to `ct_con_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:429: undefined reference to `ct_callback'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:438: undefined reference to `ct_callback'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:447: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:460: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:475: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:488: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:521: undefined reference to `ct_con_props'
cmd_bcp.o:/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:503: more undefined references to `ct_con_props' follow
cmd_bcp.o: In function `cmd_bcp':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:558: undefined reference to `cs_loc_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:539: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:566: undefined reference to `cs_locale'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:600: undefined reference to `cs_locale'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:582: undefined reference to `cs_locale'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:617: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:630: undefined reference to `ct_connect'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:672: undefined reference to `blk_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:703: undefined reference to `blk_init'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:685: undefined reference to `blk_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:722: undefined reference to `ct_results'
cmd_bcp.o: In function `bcp_data_bind':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:973: undefined reference to `ct_res_info'
cmd_bcp.o: In function `cmd_bcp':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:818: undefined reference to `blk_done'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:837: undefined reference to `blk_done'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:797: undefined reference to `ct_fetch'
cmd_bcp.o: In function `bcp_data_xfer':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:1060: undefined reference to `ct_fetch'
cmd_bcp.o: In function `cmd_bcp':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:769: undefined reference to `blk_done'
cmd_bcp.o: In function `bcp_data_bind':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:1013: undefined reference to `ct_describe'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:1031: undefined reference to `ct_bind'
cmd_bcp.o: In function `bcp_data_xfer':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:1096: undefined reference to `blk_bind'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_bcp.c:1112: undefined reference to `blk_rowxfer'
cmd_connect.o: In function `cmd_connect':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:719: undefined reference to `ct_con_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1172: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1187: undefined reference to `ct_close'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1196: undefined reference to `ct_con_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1202: undefined reference to `cs_loc_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1208: undefined reference to `ct_exit'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1209: undefined reference to `cs_ctx_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:769: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:814: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:865: undefined reference to `cs_loc_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:869: undefined reference to `cs_locale'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:908: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:925: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:949: undefined reference to `ct_connect'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:978: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:567: undefined reference to `cs_ctx_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:597: undefined reference to `cs_config'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:725: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:573: undefined reference to `cs_ctx_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:579: undefined reference to `cs_ctx_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:585: undefined reference to `cs_ctx_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:591: undefined reference to `cs_ctx_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:782: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:610: undefined reference to `ct_init'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:613: undefined reference to `ct_callback'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:621: undefined reference to `ct_callback'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:635: undefined reference to `ct_config'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:702: undefined reference to `ct_config'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:651: undefined reference to `ct_config'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:668: undefined reference to `ct_config'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:822: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:836: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:850: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:882: undefined reference to `cs_locale'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1129: undefined reference to `cs_loc_drop'
cmd_connect.o: In function `check_opt_capability':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1251: undefined reference to `ct_capability'
cmd_connect.o: In function `cmd_connect':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1137: undefined reference to `ct_options'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:896: undefined reference to `cs_locale'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1037: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1089: undefined reference to `ct_cmd_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1094: undefined reference to `ct_command'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1115: undefined reference to `ct_cmd_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1105: undefined reference to `ct_send'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1111: undefined reference to `ct_results'
cmd_connect.o: In function `syb_client_cb':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1571: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_connect.c:1580: undefined reference to `ct_cancel'
cmd_do.o: In function `cmd_do_sigint_cancel':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:677: undefined reference to `ct_cancel'
cmd_do.o: In function `cmd_do_exec':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:304: undefined reference to `ct_cmd_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:312: undefined reference to `ct_command'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:327: undefined reference to `ct_cmd_drop'
cmd_do.o: In function `cmd_do':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:260: undefined reference to `ct_close'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:261: undefined reference to `ct_con_drop'
cmd_do.o: In function `cmd_do_exec':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:322: undefined reference to `ct_send'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:334: undefined reference to `ct_results'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:434: undefined reference to `ct_cancel'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:435: undefined reference to `ct_cmd_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:351: undefined reference to `ct_fetch'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:446: undefined reference to `ct_cancel'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:447: undefined reference to `ct_cmd_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:423: undefined reference to `ct_cancel'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:424: undefined reference to `ct_cmd_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:464: undefined reference to `ct_cancel'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:465: undefined reference to `ct_cmd_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:381: undefined reference to `ct_cancel'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:382: undefined reference to `ct_cmd_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:342: undefined reference to `ct_cancel'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:343: undefined reference to `ct_cmd_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_do.c:469: undefined reference to `ct_cmd_drop'
cmd_go.o: In function `cmd_go':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_go.c:468: undefined reference to `ct_cmd_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_go.c:475: undefined reference to `ct_command'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_go.c:511: undefined reference to `ct_cmd_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_go.c:481: undefined reference to `ct_cmd_drop'
cmd_reconnect.o: In function `cmd_reconnect':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_reconnect.c:58: undefined reference to `ct_close'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_reconnect.c:59: undefined reference to `ct_con_drop'
cmd_rpc.o: In function `rpc_param':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_rpc.c:559: undefined reference to `cs_convert'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_rpc.c:573: undefined reference to `ct_param'
cmd_rpc.o: In function `cmd_rpc':
/home/bamccaig/src/sqsh-2.1.7/src/cmd_rpc.c:228: undefined reference to `ct_cmd_alloc'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_rpc.c:441: undefined reference to `ct_cancel'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_rpc.c:476: undefined reference to `ct_cmd_drop'
/home/bamccaig/src/sqsh-2.1.7/src/cmd_rpc.c:279: undefined reference to `ct_command'
var_ctlib.o: In function `var_set_interfaces':
/home/bamccaig/src/sqsh-2.1.7/src/var_ctlib.c:82: undefined reference to `ct_config'
dsp.o: In function `dsp_cmd':
/home/bamccaig/src/sqsh-2.1.7/src/dsp.c:140: undefined reference to `ct_cmd_props'
/home/bamccaig/src/sqsh-2.1.7/src/dsp.c:181: undefined reference to `ct_send'
/home/bamccaig/src/sqsh-2.1.7/src/dsp.c:248: undefined reference to `ct_cancel'
dsp.o: In function `dsp_signal':
/home/bamccaig/src/sqsh-2.1.7/src/dsp.c:770: undefined reference to `ct_cancel'
dsp_bcp.o: In function `dsp_bcp':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_bcp.c:65: undefined reference to `ct_results'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_bcp.c:94: undefined reference to `ct_fetch'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_bcp.c:76: undefined reference to `ct_fetch'
dsp_csv.o: In function `dsp_csv':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_csv.c:66: undefined reference to `ct_results'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_csv.c:95: undefined reference to `ct_fetch'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_csv.c:77: undefined reference to `ct_fetch'
dsp_conv.o: In function `dsp_type_len':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_conv.c:505: undefined reference to `cs_convert'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_conv.c:520: undefined reference to `cs_convert'
dsp_conv.o: In function `dsp_datetime_conv':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_conv.c:361: undefined reference to `cs_dt_crack'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_conv.c:341: undefined reference to `cs_convert'
dsp_desc.o: In function `dsp_desc_fetch':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_desc.c:434: undefined reference to `ct_fetch'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_desc.c:553: undefined reference to `cs_convert'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_desc.c:498: undefined reference to `cs_convert'
dsp_desc.o: In function `dsp_desc_bind':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_desc.c:86: undefined reference to `ct_res_info'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_desc.c:186: undefined reference to `ct_describe'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_desc.c:369: undefined reference to `ct_bind'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_desc.c:296: undefined reference to `ct_bind'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_desc.c:392: undefined reference to `ct_compute_info'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_desc.c:406: undefined reference to `ct_compute_info'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_desc.c:136: undefined reference to `ct_compute_info'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_desc.c:163: undefined reference to `ct_compute_info'
dsp_horiz.o: In function `dsp_horiz':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_horiz.c:97: undefined reference to `ct_results'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_horiz.c:140: undefined reference to `ct_res_info'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_horiz.c:154: undefined reference to `ct_fetch'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_horiz.c:163: undefined reference to `ct_get_data'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_horiz.c:217: undefined reference to `ct_fetch'
dsp_html.o: In function `dsp_html':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_html.c:84: undefined reference to `ct_results'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_html.c:115: undefined reference to `ct_res_info'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_html.c:135: undefined reference to `ct_fetch'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_html.c:144: undefined reference to `ct_get_data'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_html.c:187: undefined reference to `ct_fetch'
dsp_meta.o: In function `dsp_meta_int_prop':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_meta.c:613: undefined reference to `ct_res_info'
dsp_meta.o: In function `dsp_meta_transtate':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_meta.c:640: undefined reference to `ct_res_info'
dsp_meta.o: In function `dsp_meta_desc':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_meta.c:206: undefined reference to `ct_res_info'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_meta.c:216: undefined reference to `ct_describe'
dsp_meta.o: In function `dsp_meta_fetch':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_meta.c:695: undefined reference to `ct_fetch'
dsp_meta.o: In function `dsp_meta':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_meta.c:65: undefined reference to `ct_results'
dsp_meta.o: In function `dsp_meta_bool_prop':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_meta.c:584: undefined reference to `ct_res_info'
dsp_none.o: In function `dsp_none':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_none.c:54: undefined reference to `ct_results'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_none.c:65: undefined reference to `ct_fetch'
dsp_pretty.o: In function `dsp_pretty':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_pretty.c:87: undefined reference to `ct_results'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_pretty.c:127: undefined reference to `ct_res_info'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_pretty.c:141: undefined reference to `ct_fetch'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_pretty.c:154: undefined reference to `ct_get_data'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_pretty.c:208: undefined reference to `ct_fetch'
dsp_vert.o: In function `dsp_vert':
/home/bamccaig/src/sqsh-2.1.7/src/dsp_vert.c:79: undefined reference to `ct_results'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_vert.c:106: undefined reference to `ct_res_info'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_vert.c:121: undefined reference to `ct_fetch'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_vert.c:127: undefined reference to `ct_get_data'
/home/bamccaig/src/sqsh-2.1.7/src/dsp_vert.c:305: undefined reference to `ct_fetch'
sqsh_ctx.o: In function `sqsh_ctx_pop':
/home/bamccaig/src/sqsh-2.1.7/src/sqsh_ctx.c:104: undefined reference to `ct_con_props'
/home/bamccaig/src/sqsh-2.1.7/src/sqsh_ctx.c:113: undefined reference to `ct_close'
/home/bamccaig/src/sqsh-2.1.7/src/sqsh_ctx.c:116: undefined reference to `ct_con_drop'
sqsh_init.o: In function `sqsh_exit':
/home/bamccaig/src/sqsh-2.1.7/src/sqsh_init.c:315: undefined reference to `ct_close'
/home/bamccaig/src/sqsh-2.1.7/src/sqsh_init.c:316: undefined reference to `ct_con_drop'
/home/bamccaig/src/sqsh-2.1.7/src/sqsh_init.c:322: undefined reference to `ct_exit'
/home/bamccaig/src/sqsh-2.1.7/src/sqsh_init.c:337: undefined reference to `cs_ctx_drop'
collect2: ld returned 1 exit status
make[1]: *** [sqsh] Error 1
make[1]: Leaving directory `/home/bamccaig/src/sqsh-2.1.7/src'
make: *** [build-subdirs] Error 2
[bamccaig@krypton sqsh-2.1.7]$ 
VERY angry. Obviously, we're missing a library. The ct_* names suggested to me that it was one or more Sybase CT-Library functions that the linker couldn't find. Since we're using FreeTDS in its place, I went back to our friend rpm and, with the help of grep and sed, got the linking options that I was looking for.
[bamccaig@krypton sqsh-2.1.7]$ rpm -ql freetds freetds-devel | \
        grep '^/usr/lib/' | \
        sed -re 's#/usr/lib/lib##' -e 's/([^\.-]*).*/\1/' -e 's/(.*)/-l\1/' | \
        sort | xargs
-lct -lct -lct -lsybdb -lsybdb -lsybdb -ltds -ltds -ltdsodbc -ltdsodbc -ltdsodbc
[bamccaig@krypton sqsh-2.1.7]$ 
For those of you who don't speak UNIX so well yet, that basically means:
  1. List all the files that were installed by the freetds and freetds-devel packages (it almost certainly would suffice to just check freetds-devel, but it doesn't hurt to include freetds in the search).
  2. Filter out all paths that don't begin with /usr/lib (since we're looking for libraries and we know they are installed in /usr/lib).
  3. Remove the directory (/usr/lib) and prefix (lib) from each library (most shared libraries are named something like libfoo.so).
  4. Remove any trailing characters (anything after the first dot or dash, since most shared libraries have version information appended to their name, e.g. libfoo.so.1.2.3; those usually have friendlier symlinked names, and the linker will figure out what our short names mean).
  5. Prefix each library with -l, since that is the option that GCC uses to specify libraries.
  6. Finally, sort them for easy reading and join them onto a single line.
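As an aside, the duplicated -l flags are just an artifact of each library showing up as several symlinks (libfoo.so, libfoo.so.4, libfoo.so.4.0.0, and so on). They're harmless to the linker, but swapping sort for sort -u in the pipeline above should collapse them down to:
-lct -lsybdb -ltds -ltdsodbc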
Reviewing the command line in the above error message, we can see that sqsh isn't being linked with any of them! That has to be the problem! The h4x solution then is to edit the Makefile in question and add those to the command.
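Before hacking the Makefile, it doesn't hurt to confirm that FreeTDS really does define the missing symbols. nm can probe the shared library for us (I'm picking ct_cancel out of the error spew above as a sample):
$ nm -D /usr/lib/libct.so | grep ct_cancel
If that prints ct_cancel with a capital T (defined in the text section) beside it, then libct provides the symbol and linking with -lct should satisfy the linker.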

Upon opening that file, I found a handy SYBASE_LIBS variable that seemed rather relevant. Above it is even a handy comment suggesting that users of some platforms might need to manually edit that line. Open that file in your favorite editor...
[bamccaig@krypton sqsh-2.1.7]$ vim src/Makefile
...and replace that line with the results from our earlier query:
SYBASE_LIBS = -lct -lct -lct -lsybdb -lsybdb -lsybdb -ltds -ltds -ltdsodbc -ltdsodbc -ltdsodbc
Save the file and try to build again.
[bamccaig@krypton sqsh-2.1.7]$ make
[Now it should almost certainly be happy unless the world just hates you...]
[bamccaig@krypton sqsh-2.1.7]$ su -c 'make install && make install.man'
Password:
[Installation should go happily too]
[bamccaig@krypton sqsh-2.1.7]$ 
If you can run sqsh now (for example, sqsh --help to test) then the build and installation should have been successful! Congratulations! Now you (and I) just have to learn to use it. :-X

To get you started, the following allows me to connect to the SQL Server located on our LAN, running on a Windows server (**shudder**).
[bamccaig@krypton ~]$ sqsh -S <server_ip> -U <username>
[Program information]
Password: 
[I am now inside of a sqsh shell!]
> quit
[bamccaig@krypton ~]$ 
Obviously, this is not the most convenient way to connect: typing the server, username, and password (!) each time gets old fast. Surely there must be handy configuration options to make things smoother. For now, I have created shortcut scripts (readable/writeable/executable only by me) that do the connecting for me to save me the trouble, something along the lines of the sketch below.
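This is only a sketch: the address, username, and password are placeholders, and I'm assuming sqsh accepts a -P password option the way isql does (check sqsh --help before trusting me on that). Since the password is embedded, chmod 700 is not optional.
#!/bin/sh
# Hypothetical connection wrapper for the LAN SQL Server.
# The server address, user, and password below are made up.
exec sqsh -S 192.0.2.5 -U bamccaig -P 'secret' "$@"
Passing "$@" through means any extra options (a database to use, for example) still reach sqsh.
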
References

[1] Thanks to the maintainer for clearing this up via E-mail.

2010-05-31

Wasabi - Apparently ASP/VBScript Isn't Evil Enough

I've been looking for "project management" software (I know, yuck) for some time, mainly because I want to incorporate bug/issue tracking, discussion boards, and wikis into j0rb (and into personal projects, etc.). Firstly, it's very difficult to keep track of bugs by hand, and often you forget about them before you fix them. When it comes to software development, there are just too many issues (bugs, design decisions, potential hiccups) to keep track of. I also want some persistent medium for discussing software design with colleagues. At j0rb we currently discuss things either through instant messaging or in person.

The problem with instant messaging is that there's no organization to it. If you forget what was discussed or what conclusions everyone came to, you have to search through chat logs and hope to find it. If you do find it, you have to sort through intertwined discussions to find the relevant material. It's also hard to have meaningful discussions because instant messengers are generally designed for sending single sentences or short paragraphs; it's hard to communicate code or other complex ideas that need more space. The problem with talking in person is that there's no log at all: there's no searching it to refresh your memory, etc.

Open source projects have gotten by with mailing lists for a long time and it seems the most experienced still prefer them to discussion boards. I figured there must be good reason for this so I've personally taken a liking to them. However, I don't think mailing lists would work well at j0rb. It's a Windows-based shop and I generally hate Windows-based mail clients for good reason. I generally use Gmail's Web interface for personal mail, which works quite well for most E-mail needs, but I wouldn't be allowed to discuss company topics through a remotely hosted service. At least, not a free one with no guarantees about security or privacy.

I'd be happy to try mutt from Cygwin against a locally hosted server, but odds are that my colleagues would be sending HTML E-mails and wouldn't understand why that was a problem (nor why I'd choose to use a plain-text client). For these reasons, I don't think a mailing list would work particularly well at j0rb. Colleague incompatibility. :P A discussion board seems to be the next best thing (with the advantage of edits to correct mistakes, etc.).

Wikis are great for keeping track of a growing and dynamic knowledge base of information. I think they'd work well for documenting certain gotchas discovered in languages, APIs, and platforms, etc.; as well as our own software's gotchas.

Anyway, I've recently been reading Joel on Software. There are a lot of really good articles that make a lot of sense. It seems Joel Spolsky is very experienced in project management and the like. The allure of fully defined specifications and feasible schedules got me interested in taking a look at FogBugz, some project management software developed by Joel's company, Fog Creek Software. I generally avoid commercial software, since as a general rule it's garbage, but I am willing to appreciate some commercial offerings (everything from Microsoft not being among them *cough*).

Anyway, it sounded like FogBugz was a pretty complete solution and I had hoped that it was better than what we have now (which can't do any of the things I mentioned above). I suggested it as an option to management, but the price seems kind of steep. Fortunately, they offer the hosted service for free to individuals so I decided to sign up for personal projects so I could get a look at just what it offered. I was pretty disappointed. At least at first glance, the UI seems rather bloated and disorganized. I'm having a hard time trying to figure out where different types of information are organized. It seems all it really tracks are "cases", which can be bugs, issues, or what have you. Then they use "filters" to determine which cases you see, by tons of criteria. This seems to be the way to show cases for a particular project. It seems awkward that way, but maybe that's just because I'm not used to it.

I decided to ask the Interwebz what it thought of FogBugz. I started where I usually start: at Wikipedia. It was there that I discovered, much to my dismay, Thistle and Wasabi.

It seems that FogBugz was originally written in Classic ASP/VBScript. I can sympathize because our main software project is mostly written in the same. What looks like an OK language on the surface surely is not. Visual Basic is bad enough, but VBScript is like a stripped down Visual Basic with half of the features missing or mutilated. Those that have worked with VBScript for any considerable period of time know that it is the decay-er of sanity. It has numerous limitations and a weak "standard library", and most non-trivial functionality comes from server "components" that do not appear to be native VBScript at all (I've always assumed they were DLLs written in C or C++, but I suppose it's possible that they're actually written in pure evil). VBScript can't do much without these components. You apparently need them for things like handling file uploads or connecting to databases. Things that other languages, like PHP, Python, and Perl, can do "themselves". Probably because those languages are "open".

Apparently, Fog Creek Software wanted FogBugz to run on Linux servers, but ASP/VBScript is (officially) an abomination of IIS and the Windows operating system. I think at this point most would realize their mistake (developing the software with ASP in the first place) and work towards correcting it by rewriting the application in a better, cross-platform language, but it seems that Fog Creek decided instead to develop a "compiler" (converter, or whatever you want to call it) that could translate ASP into PHP so that they could run their application on Linux boxes without having to completely rewrite it.

Now I don't know how many lines of code FogBugz was at the time, or how complicated this "compiler", dubbed Thistle, was to write (it's over my head, I know that), but I know that it only took a few days of maintaining an ASP/VBScript application for me to begin begging almost daily to rewrite the application in something more sane. They didn't stop there, however. They apparently realized that VBScript combined with Thistle was too limited in what it could do (light bulb, anyone?). They missed their second opportunity to change platforms and instead wrote another "compiler" that extends VBScript's functionality, adding modern features that probably should have been there from the beginning, and spits out either PHP or .NET. They call this monstrosity Wasabi. All of that trouble to extend an evil that never should have existed in the first place. I can't imagine that developing Thistle and Wasabi was faster than just rewriting the application in a cross-platform language, preferably an open one that will be around for a while, and leaving it at that.

Apparently they keep both of these tools internal. So not only did they create such evils, encouraging the persistence of VBScript-ness, but they didn't even release them to the world for others to benefit from (free or otherwise). WTF.

I'm somehow less enthusiastic about FogBugz now...