Recently in Linux Category

Project Heresy Redux

linux_inside (Photo credit: Adriano Gasparri)

Some years ago I described my experiences converting a non-technical computer user to the Linux desktop; nearly 10 years ago, to be precise. I bring up that experience now because there is a great deal of talk regarding the end-of-life of M$ XP and the real possibility of Linux replacing XP, or at least of everyday Windows users considering Linux a viable candidate.

My first experience working with a novice, non-technical user was a mixed bag at best. In truth, it could be considered a failure, because that particular user complained that there was no exact replacement for Outlook. Another issue was that certain attachments did not open upon clicking the hypertext link in the email. All of these were valid points, but that was over 10 years ago and the Linux desktop has made great strides in the quality department. I have not used Evolution in quite a while, so I do not know if its design and development strategy is still simply to mimic Outlook. Hopefully the Evolution project has evolved to do far more than mimic a commercial piece of software. For whatever reason, when people are faced with the prospect of using a new software package, they automatically assume certain commercial packages are the gold standard for how things should work. If the new offering does not have the identical bells and whistles they are accustomed to, it is assumed to be a shortcoming.

This is flawed reasoning in my book, but I digress. Now, while many people are merely talking about replacing WinXP with Linux, I am actually getting a chance to do it. One of my friends has decided to take the plunge and replace his XP box with Linux.

His requirements:

  • Share digital media to multiple M$ Windows clients on LAN
  • Access digital content remotely
  • Stream digital content to LAN

It is true that experience is the best teacher, and one of the lessons I learned when I converted a non-technical user to Linux nearly a decade ago is to temper expectations. While I know Linux (and, for that matter, BSD) is a superb operating system, it will be a significant challenge for a non-technical Windows or MacOS user, particularly one who has spent their entire computing life using a GUI. It is important to note that there are significant differences between my earlier experience and what I have been asked to do now. The previous heresy project was strictly geared around userland applications and desktop computer usage. In contrast, the present task is to deliver a Linux server, not a desktop computer. The entire interaction will be done via the command line and a web browser. A typical headless monster scenario.

I explained in no uncertain terms that a computer lab book and help texts would be in order. I forewarned them that they would get frustrated, as the learning curve would be steep initially. Most importantly, I told them not to blame me, as they asked for the transition :-)

Again, I chose Debian as the GNU/Linux distribution. I am not sure Ubuntu was available a decade ago, but perhaps I would have chosen that distro for the previous heresy project, if only because the desktop experience might have been a bit more polished, with sufficient hand-holding.

So in this instance, I set out to do a myriad of software installs. I needed a web server, a file server, an SMTP server, and some cloud application server software. I settled on Apache, Samba, and OwnCloud. I installed the i386 netinstall for Debian. I had not run the Debian netinstall in some time, as I prefer Slackware for my own use. I noticed that the Debian installer seems to assume that everyone uses DHCP on their network. What about people who use Debian for a server? Why on earth would you choose DHCP for a server? This particular default forced me to drop into a shell and manipulate the network settings manually so that I could get beyond the base software install. I suspect this would intimidate someone coming from the Windows or MacOS world. Outside of this snafu the install was rather uneventful. I set up initrd and LVM.
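
For anyone who hits the same snag, the manual fix is just a few commands from the installer shell. A rough sketch (the addresses here are hypothetical; substitute your own static IP, gateway, and DNS):

ip addr add 192.168.1.10/24 dev eth0               # assign the server's static address
ip link set eth0 up
ip route add default via 192.168.1.1               # point at the LAN gateway
echo "nameserver 192.168.1.1" > /etc/resolv.conf   # temporary DNS so the mirror resolves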

Sharing digital media to the clients on the local area network can be done with Samba and OwnCloud. Not tough at all. Accessing digital content remotely requires a bit of finesse for an elegant solution. Obviously ssh is all most geeks need, but that is not good enough for a non-technical end-user. I installed OpenSSL and, of course, OpenSSH. I explained that self-signed certificates would be the most cost-effective alternative. I also showed them dyndns.org and their service for registering a dynamic IP address, and I instructed them on using PuTTY for secure access to the Linux box from a Windows computer.

Thus far all is going well. I still need to clean up the self-signed OpenSSL certificates so that they can access OwnCloud over https.
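
For reference, generating a fresh self-signed certificate for the Apache/OwnCloud vhost boils down to something like the following (a sketch; the file paths and the 365-day lifetime are arbitrary choices, not necessarily what I used):

# create a self-signed cert and key, then point the Apache SSL vhost at them and reload
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/owncloud.key \
  -out /etc/ssl/certs/owncloud.crt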

I will update everyone on the progress of this project at my earliest convenience.



Yet another installment of our Foray series of technical discussions. It has been quite some time since my last installment; in a manner of speaking, I have been otherwise preoccupied. Over the past few weeks I have learned about the original IOS, that is, the Cisco proprietary operating system for their networking devices. Perhaps one day I will sit for the CCNA exam?

I digress; however, I do plan to talk about my crash course indoctrination into Layer 3 (managed) switches. Suffice to say I have much to learn about IOS.

What exactly is hardware virtualization? Why should it interest you?

Well, most people now understand that there is generally a cost penalty for proliferating bare-metal computing. More specifically, for every computer running in your home there is some measure of power consumption penalty. In truth, I have never taken the time to calculate how much electricity our home servers consume during an average month. In the basement, I run a VPN/gateway, a PBX (a re-purposed HP Pavilion running OpenBSD), a Slackware workstation (the AG desktop), a managed 24-port switch, and a wi-fi access point. There were two more machines that ran continuously but have since been retired (yet another HP Pavilion whose proprietary power supply failed, and a Pentium III based machine that ran my MythTV setup). Needless to say, that is a healthy amount of power consumption. I would guess that at minimum each machine could easily consume 350W depending upon computational load.

I suppose the delightful promise of virtualization is the ability to spin up an instance of your preferred operating system within seconds and then deploy it locally or remotely.

Of course, locally would suggest that you are running it on your own hardware, and remotely would suggest that you are leasing space from a vendor (i.e. a virtual private server, Amazon EC2, or more recently GOOG).

Please note that this blog entry will not be an exhaustive explanation of para-virtualization or the differences between the various available hypervisors (VirtualBox, Xen, etc.). That is beyond the scope of this post and will be left as an exercise for the reader. I merely wish to discuss my experiences with QEMU/KVM.

Firstly, it is important to note that KVM is a kernel-based virtualization strategy which enables full virtualization of the guest operating system of your choosing. It is not a para-virtualization platform like Xen. Perhaps what I like most about KVM is that the project is not owned or managed by a corporate conglomerate, so there is no concern about the codebase being driven by shareholder interests. Since KVM is tightly bound to the kernel, you can expect it to work as advertised without issue.

For those of you who are already familiar with QEMU, you will note that it was the first Open Source alternative to VMWare. Apparently the project founder, Fabrice Bellard, was more interested in stable software than in making a market splash. The project has been in existence for at least 8 years and has just reached v2.0.

When I was initially introduced to virtualization circa 2001, I used the student edition of VMWare, which I believe was v1.0. It was truly a memory hog, but certainly less painful than dual booting. At that time, the only reason I cared to explore hardware virtualization was to run the odd M$ Windows program. Fast forward to the present: I can attest that running Win7, Android, or PC-BSD as a guest OS offers immediate value.

I am often faced with supporting end-users who only run Win7, so I really needed to use my _legal_ copy of Win7 Pro and extract the CD image. Below is the command that I ran to create the virtual machine. The flags are interesting, but before I get into them let me share a bit of my experience running QEMU-KVM on Slackware.

qemu-system-x86_64 -enable-kvm -localtime -m 1000 -hda /home/agreen/.aqemu/Windows_7_HDA.img -cdrom /dev/cdrom -boot d

While you do not need to run a GUI to control and manage virtual machine instances, it is convenient for organizing your virtual guests, especially if you are running several instances. In my case, I only ran one guest OS, and I chose aqemu. Some people seem to prefer virt-manager, but aqemu was simple enough to use and had a fairly lightweight GUI.

I immediately discovered a well documented bug within Cairo, one of the graphics libraries. After perusing the Interwebs, I was able to install the latest Cairo library from Slackware-current. Apparently, the Cairo library provided with Slackware 14.0 had a weird issue that prevented the aqemu window from refreshing or simply being drawn as expected.


Bridged Networking with KVM (Photo credit: xmodulo)

After installing the correct library, all worked as expected, and I was fascinated by how easy it is to pause the guest OS. I allocated roughly 8GB of disk space to the Win7 instance. As the '-m 1000' flag suggests, the virtual machine uses 1000MB of RAM. I believe there is a setting that permits the virtual machine to use additional memory on demand, but I am certain I did not choose that option. The disk image actually lives on a portion of my home partition, which is managed by LVM, so in essence I could grow the disk if necessary without disrupting anything on my host box. The '-hda /home/agreen/.aqemu/Windows_7_HDA.img -cdrom /dev/cdrom -boot d' portion simply defines the virtual hard disk, the path to the virtual cdrom drive, and the boot device. Really fairly simple: I did not need any exotic virtual drivers, and it did not take several hours to set up or install, so do not believe the plethora of erroneous information in circulation regarding running Win7 as a guest OS within QEMU-KVM.
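
For completeness, the backing disk image has to exist before the first boot. A minimal sketch of the two-step setup (qcow2 is one choice here; a raw image works just as well):

# create the 8GB backing disk image; with qcow2 the file grows on demand
qemu-img create -f qcow2 /home/agreen/.aqemu/Windows_7_HDA.img 8G
# then run the install command shown above; once Windows is installed, drop the
# '-cdrom /dev/cdrom -boot d' flags to boot straight from the virtual disk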

One major issue is the bridged networking feature. By default QEMU creates a user-mode virtual network that allows the virtual machine to see the outside world, but you will not be able to browse the local area network. After reading alienBOB's wiki, I see there is a means to get around this limitation, but it requires installing and setting up DNSmasq. I will revisit this blog entry once I get a moment to resolve that problem and set up bridged networking as desired.
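
For the curious, the generic recipe for true bridged networking (not the DNSmasq approach from alienBOB's wiki, which I have not tried yet) looks roughly like the following. The interface names and user are assumptions; adjust to taste:

# move the physical NIC onto a software bridge
brctl addbr br0
brctl addif br0 eth0
ip addr flush dev eth0
dhclient br0                                     # or assign the host address statically on br0
# create a tap device for the guest and attach it to the bridge
ip tuntap add dev tap0 mode tap user agreen
ip link set tap0 up
brctl addif br0 tap0
# finally, point qemu at the tap instead of the default user-mode network
qemu-system-x86_64 -enable-kvm -m 1000 -hda Windows_7_HDA.img -net nic -net tap,ifname=tap0,script=no,downscript=no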


klogctl: Operation not permitted

In recent releases of Slackware, I have noticed that dmesg is disabled by default for non-privileged users. I can imagine why this is done on server installations, but it is somewhat of a nuisance on single-user machines. Nonetheless, it can be quickly remedied.

Grepping for 'dmesg' in the startup scripts (/etc/rc.d/rc.*) you will find the following ->

/etc/rc.d/rc.M:/bin/dmesg -s 65536 > /var/log/dmesg
/etc/rc.d/rc.M.new:# Set the permissions on /var/log/dmesg according to whether the kernel
/etc/rc.d/rc.M.new:# permits non-root users to access kernel dmesg information:
/etc/rc.d/rc.M.new:if [ -r /proc/sys/kernel/dmesg_restrict ]; then
/etc/rc.d/rc.M.new:  if [ $(cat /proc/sys/kernel/dmesg_restrict) = 1 ]; then

The interesting test is -> if [ $(cat /proc/sys/kernel/dmesg_restrict) = 1 ]

It is logical to suspect that the setting is indeed '1', restricting access to dmesg kernel messages. Simply running the following as root resolves the problem:

echo "0" > /proc/sys/kernel/dmesg_restrict

To confirm that the previous command worked as desired, the following command should return "0" (and indeed it does on my system):

cat /proc/sys/kernel/dmesg_restrict

Now you can exit the root shell and retry dmesg as a non-privileged user.
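
Note that the echo above only lasts until the next reboot. If you want the relaxed setting to stick, the usual sysctl route works, assuming your init scripts read /etc/sysctl.conf at boot (most distributions do; verify on your release):

echo "kernel.dmesg_restrict = 0" >> /etc/sysctl.conf
sysctl -p        # re-read sysctl.conf immediately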

Geek fatigue

One of the most tiresome aspects of sysadmin work is repetitive tasks. It has been said that sysadmins are lazy by nature; I would have to agree. A good example is having to add or delete users on a Linux box. Adding one user at a time really isn't an issue if you only have one or two; however, if you have several accounts to add it can become quite tedious.

I discovered the newusers utility, which allows one to add several users at once. You simply need to set up a flat text file with the fields shown below:

pw_name:pw_passwd:pw_uid:pw_gid:pw_gecos:pw_dir:pw_shell

Actually, you can provide all or just some of the parameters above. I chose to provide only a username, password, home directory, and shell, so you have the following ->

username:passwd::::homedir:$SHELL  (the run of consecutive colons represents the blank uid, gid, and gecos fields)

I simply stored all the users in a file

username1:passwd::::/home/username1:/bin/bash

username2:passwd::::/home/username2:/bin/bash

Another great utility is mkpasswd, which generates a random 9-character password. Obviously, these tools used in conjunction are wonderful. They are particularly useful when setting up multiple user accounts on a new server.
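
Tying the two together is a short loop. A rough sketch (the usernames are made up, and it assumes the expect-provided mkpasswd that prints a random password when run with no arguments):

for u in alice bob carol; do
  echo "$u:$(mkpasswd)::::/home/$u:/bin/bash"    # one newusers record per account
done > userlist.txt
newusers userlist.txt                            # create all of the accounts in one shot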

As I have mentioned previously, when I set up new user accounts on the servers which I build for clients, I also set up Samba usernames and passwords.

In the process of setting up these Samba accounts, I have encountered problems with the ubiquitous 'smbpasswd -a username', which basically adds new users to the Samba database.

However, there are times when 'smbpasswd' will not work as expected. There are tools that mitigate this problem: the tdbsam backend and pdbedit can also repair or create Samba users in the Samba database.
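
For example, adding and then verifying an account directly with pdbedit looks like this (assuming the account already exists as a system user):

pdbedit -a -u username     # prompts for a password and adds the user to the passdb
pdbedit -L                 # list the accounts currently in the Samba database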

I will share some additional scripting measures for creating Samba user accounts in a forthcoming blog entry.






Wonders of setgid

The Samba Project is perhaps the best example of the value of reverse engineering. More specifically, I would assert that Samba is the critical glue application that provides tremendous value to Linux and helps transform the operating system into a formidable player on heterogeneous networks where M$ client desktops are ubiquitous.

At the moment my holy trinity for large deployments would be Apache, OpenVPN, and Samba.
For the unfamiliar, Samba allows a Linux system to speak and understand the CIFS/SMB protocol that is native to the M$ operating system. The fantastic reverse engineering work of Andrew Tridgell and the rest of the Samba Project team makes all of this transparent to the end-user.

From time to time, I run into access control problems at the filesystem level. When creating Samba shares, I generally follow the create mask and directory mask convention of 0755, which is supposed to ensure that owner rights are preserved when new files and directories are created. However, even with strict create and directory masks enforced, a new directory created on a Samba share assumes the owner and group of its creator. Inevitably, this condition causes problems, particularly when you have multiple users and multiple Samba shares, with files and directories being created on a daily basis. The unexpected change of directory group ownership can be quite annoying.

After perusing the interwebs, I found that the following technique ensures that directory group ownership does not change unexpectedly when files and directories are manipulated within a Samba share.

I ran the command below against all of my Samba shares.

find /some/dir/path/ -type d -exec chmod g+s {} \;
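
Setting the setgid bit this way means new files and subdirectories inherit the group of the parent directory rather than the creator's primary group. A complementary share-level approach (a sketch for illustration, not my production config) is to preserve that bit in the share definition itself:

[media]
   path = /some/dir/path
   writable = yes
   create mask = 0664
   directory mask = 2775        # keep the setgid bit on directories Samba creates
   force directory mode = 2775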




Perils of Improper Disaster Recovery

I have been using LVM for personal and client usage, and I have found it quite robust for managing swaths of disk space. Lately, I have begun to realize that I need to address redundancy and possibly consider cloning some of the running applications. Primarily, I am running web-based applications (i.e. SugarCRM, Moodle, and OpenEMR); at the core it is basically Apache and a MySQL database. I have heard of the advantages of Amazon EC2 and other so-called cloud solutions. Nonetheless, the strategy that I have deployed is a steady dose of rsync coupled with a very reliable storage repository called rsync.net. For those who might not be familiar with rsync.net, they provide very reasonable and robust archiving and storage space. The cost is reasonable and they are absolutely a DIY shop. If you need help scripting backup solutions, they provide a very comprehensive documentation archive to help end-users get the most out of their offerings. Primarily, all you need is a set of ssh public keys and the ability to run rsync.
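
The backup job itself is nothing fancy. Something along these lines, run from cron (the remote host and paths here are placeholders, not my actual account):

# push the web root and database dumps offsite over ssh; --delete keeps the mirror exact
rsync -az --delete -e ssh /var/www/ user@rsync.net:backups/www/
rsync -az -e ssh /srv/db-dumps/ user@rsync.net:backups/db-dumps/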

All of this aside, I can say with certainty that the robustness of LVM comes with a complexity penalty. A blend of dd and carelessness corrupted one of my LVM partitions, which unfortunately held several directories whose filesystems were not backed up. I spent time with a hex editor carefully parsing through the metadata on each of the LVM volumes, hoping to find discernible differences. The idea was to look at the timestamps and then restore the LVM partition to its earlier state. All of these attempts failed miserably.

LVM (Photo credit: Luis M. Gallardo D.)

Ultimately, after being unable to make sense of the inconsistencies within the various LVM volume groups, I simply had to punt and reinstall the operating system. Luckily there _were_ some back-ups available. Unfortunately, ssh keys, VPN certificates and the like had to be recreated. Lots of work was lost and some very painful but necessary lessons were learned.

  • Establish a fault tolerant means of restoration
  • Automation of system configuration
Because LVM can be fairly complex to restore when a volume gets corrupted, it is important to ensure filesystem integrity. It is not enough to take snapshots of the logical volume. This is particularly true when your volume group spans two physical disks. Though it is possible to discern where data begins and ends within a logical volume, it certainly is not for the faint of heart. So, robust filesystem back-ups should be the order of the day. Again, I would encourage the use of rsync and a remote repository like rsync.net.

LVM offers a means to take snapshots of selected logical volumes; it basically retains a frozen copy of the portions of the volume that change over time. Though this is not the same as ZFS or DragonFly BSD's HAMMER, it does provide a measure of fault tolerance.
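
Taking a snapshot is essentially a one-liner. A quick sketch, assuming a volume group named vg0 with a logical volume named home and enough free extents to hold the changes:

lvcreate --size 5G --snapshot --name home_snap /dev/vg0/home   # copy-on-write space for changed blocks
mount -o ro /dev/vg0/home_snap /mnt/snap                       # back up from the frozen view
umount /mnt/snap && lvremove -f /dev/vg0/home_snap             # discard the snapshot when done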

Next, the idea of automated system configuration is the dream of every sysadmin. The gruesome prospect of having to re-install software and re-create user accounts on a system which must be restored is the equivalent of a splinter in the eyeball. Not much fun at all.
I had heard about CFEngine, Chef, and Puppet, but had never deployed any of these configuration management tools. Puppet appears to have a vibrant community around the project, so I thought it would be good to learn how to use it. I won't get into a How-To for deploying Puppet, as it is beyond the scope of this entry and there is a plethora of information on the Interwebs. My plan is to mirror the bare metal server configuration on a virtual machine that I lease. The challenge of spinning up a database and an assortment of PHP-based web applications was a bit daunting for a newcomer to Puppet, so it will take me a bit more time to learn about configuration management automation. Until then, I will be using bash scripts and manual labor to accomplish the restoration of the user accounts and software.

2011 Ohio Linux Fest

OpenEMR Logo (Image via Wikipedia)

Though this is not my first OLF experience, it does mark the first time that I have come on a Friday. As the conference extended its offerings, the OLF organizers decided to create certification and Health Track content on Friday.

Last year I was on a mission to learn more about electronic medical records, specifically OpenEMR. During the 2010 OLF I met Fred Trotter who, while not a current contributor to the OpenEMR project, introduced me to the Availity clearinghouse. He also gave me some helpful suggestions on how I might begin to contribute to the OpenEMR project. This year I had the pleasure of meeting Dr. Sam Bowen, who is not only a Gentoo user but also an Internal Medicine physician. That is remarkable indeed. He gave me some pointers on how to upgrade from an experimental OpenEMR v2.9.0 to the latest stable release, v4.1. Bowen gave me a full rundown of the meaningful use effort for OpenEMR and some of the struggles associated with funding the effort. He was also gracious enough to help identify some of the helpful third-party companies available for hire to do additional development on OpenEMR. We exchanged business cards and I plan to drop him an email right away.

Another item of note was that I participated in my first GPG key signing party. I'll have to capture the finer details in another entry. GNU Privacy Guard (gpg) allows one to deploy publicly available cryptography (thanks to Phil Zimmermann) to encrypt email and other documents. Basically, key signing is the act of verifying that a key indeed belongs to its rightful owner. You verify the key fingerprint by having the owner read it aloud, and you exchange picture identification with the other key signing participants. Very cool indeed. I'll summarize some of the talks I attended in a forthcoming entry. I learned much more at the 2011 OLF than I had at previous OLFs.

Lastly, I ran into the urban camper named Klaatu, who is also a fellow Slackware user. We chatted briefly, but did not get a chance to exchange public GPG keys. Oh well, perhaps next time.


Experiences with CUPS

CUPS (Image via Wikipedia)

I have been running the Common Unix Printing System (CUPS) for perhaps 10 years. Prior to CUPS I used LPRng and plain LPD, the classic BSD-style print spoolers for UNIX-like computing systems.

CUPS has become the de facto printing solution for Linux. I'm not exactly sure why it was chosen, but it certainly seems ubiquitous across most Linux distributions. Though CUPS was purchased by that Cupertino hardware company that I love to hate, it still gets widespread use.

Running CUPS can either be exhilarating or very humbling.
I still recall a time when I conducted a demo for WLUG and failed miserably in my attempt to install and run CUPS within 15-20 minutes. Some other folks have also stubbed their knuckles and ranted about this software. Now that I am a bit wiser and more familiar with the print services, here are a few pointers.

You'll need to understand the following

  • Hardware Compatibility
  • PPD files
  • Samba
  • Ghostscript 
  • Basic Networking
Before purchasing a printer be sure that it supports Linux. Typically most HP printers will work quite well. I have had good experiences with Brother printers too. If you do not take the time to verify that your printer works with Linux, you will most certainly become very frustrated.

At our small office we've used a Dell Color Laser 5110cn and a Canon MF8350cdn multifunction laser printer. The latter's printer drivers were released under the GPLv2. It took me a while to find these drivers (PPD files); as it turns out, Canon released them through Canon Japan, and I stumbled upon them running a GOOG search. A very helpful community-run site is linuxprinting.org, which maintains a large printer compatibility database. I will one day add the Canon MF8350cdn to their list.

I'm not going to delve into PPD files; suffice to say, they are essentially print drivers. While you do not need Samba to run a reliable print server with CUPS, Samba can help you with authentication and can also serve as a domain controller in a M$ Windows environment. In this particular case I used Samba to share the Canon multi-function printer as a resource to four Windows clients. The printer is attached to a Slackware 13.1 box that serves as the print server.
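
The Samba side of that arrangement is just a few lines of smb.conf. A minimal sketch (the spool path and guest settings are illustrative, not copied from my server):

[global]
   printing = cups
   printcap name = cups
   load printers = yes

[printers]
   path = /var/spool/samba      # world-writable spool directory for incoming jobs
   printable = yes
   guest ok = yes               # matches the 'nobody' guest entries in page_log below
   browseable = no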

A word about Samba...
Windows clients expect SMB/CIFS to be spoken over the network, and Samba allows Unix-like systems to address this need rather easily. Shared network resources can be accessed and the appropriate authentication handshaking can take place. Samba handles the often-used NT LAN Manager authentication rather well. Nuff respect to hackers Andrew Tridgell and the rest of the Samba Project team. Samba is an indispensable networking tool on any heterogeneous network.

Linux and Unix-like operating systems work best with printers that speak native PostScript. I leave discovering the definition of PostScript and its origin to the reader, as it is beyond the scope of this entry. However, I will state that you will need to install Ghostscript to successfully get printing to work on Linux, BSD, and Unix systems. Basically, Ghostscript is a software interpreter for PostScript. In cases where your printer does not speak native PostScript, Ghostscript fills in the missing bits that your Unix-like operating system expects. Actually, most consumer-grade inkjet printers do not speak PostScript in the manner that the aforementioned operating systems expect, and most laser printers speak some proprietary flavor of PCL. If you really wish to learn about the origins of PostScript and the many variations of PCL, I'd highly recommend this paper, written by the author of the LPRng project, Patrick Powell.

Lastly, you'll need to understand basic networking. As trivial as it sounds, you'll need to be able to ping each client in your network to be sure it can receive packets that originate from anyone on the LAN.  Once you've established good connectivity, the CUPS logs will become useful.

Below is a snippet of /var/log/cups/page_log

Note that 'nobody' is considered a guest account on the server. 

Canon_MF8300_Series_ 104 nobody [06/Jul/2011:17:36:06 -0400] 1 1 - 192.168.1.200 smbprn.00000072 Untitled - Notepad - -
Canon_MF8300_Series_ 105 nobody [06/Jul/2011:17:36:34 -0400] 1 1 - 192.168.1.200 smbprn.00000073 Untitled - Notepad - -

As an aside, I detected an annoying problem with "user" level security in Samba.

For the moment, I'm forced to use the "share" security model. The problem seems to originate from the Win7 clients; I believe the issue is NTLM negotiation. I seem to recall the fix involves hacking the Windows Registry on the Win7 clients.

In summary, CUPS is an ideal solution for printing on Unix-like systems. However, you must be sure to do upfront research to avoid purchasing the wrong type of printer.







2010 Ohio Linux Fest

Although it has been nearly a month since the event took place, I thought it would be good to share some of my experiences. This year I decided to get to the venue Friday evening instead of waiting until Saturday morning. I shared a ride out to OLF with my buddy Mack. I used some microblogging tools to connect with the co-founders of Blacks In Technology (BIT), Greg Greenlee and Ronnie Hash. In fact, I had a great conversation with Greenlee a couple of weeks back. We talked about engineering, the importance of BIT, and hacking as it pertains to people of color. It was different not to be the one asking the questions; however, I was flattered that anyone cared enough about my story to give me a few minutes to shine :-)

Regarding the OLF - I spent most of my time in the Medical track, which was new this year.
I sat in on the OSCAR talk. It is amazing how personal health records and client/server based technologies have come full circle. The OSCAR client software seems to be built on a Java stack. It would seem that Java isn't going anywhere, but the developers of OSCAR are very worried about Oracle's evil intentions. I'm not sure where it will end; I do know that in the near term they will need to consider alternatives. I met OpenEMR developer Fred Trotter at one of the medical track talks. It was great to converse with him. I was particularly interested in electronic billing clearinghouses. He threw me a couple of bones and gave me some ideas on how I could help the OpenEMR community and also get a couple of my itches scratched at the same time.

Trotter offered some interesting advice. He said, "If you're spending your time working on things that do not improve the bottom line.." I suppose I always fall into that trap. Limited resources will make you become a jack of all trades and master of none. Nonetheless, if I have to manage telephony solutions, an email server, etc., I still believe that I rest easier knowing that I own and manage my data end-to-end. Perhaps one day it will be OK to give up some of this control.

The other talk I attended covered the Nagios Project. Truthfully, I have not done much with network monitoring tools. I had actually intended to use Zenoss, but after sitting through the Nagios discussion, it was clear which software had the market share. In fact, in a weird coincidence, the Zenoss discussion took place immediately following the Nagios talk in the same room, and I noticed a stark decrease in the number of attendees. Apparently, Nagios has greater uptake in the community, non-commercial space. For Nagios to work for my needs, I'd likely have to install it in a virtual machine on a cloud computing host. I'll eventually get around to explaining my first foray into cloud computing in a future entry.

Lastly, I ran into Dave Yates of LottaLinuxLinks netcast fame. He was a cool guy who was helping out at the TLLTS booth. He showed me his Nokia N810 tablet, complete with his comics collection on microSD. Pretty slick. We agreed to collaborate on a project in the future. I was so formal that I gave him one of my business cards; hopefully he didn't think I was some flake. I have yet to hear from him ;-) Jokes aside, I really do miss my netcasting days. It was fun virtually meeting people and having technical dialogue on a variety of subjects. Although "AG Speaks" has been on extended hiatus, I do plan to eventually resurface and begin putting out content again.

The keynote was delivered by Christopher "Monty" Montgomery, the creator of the Ogg container and the founder of the Xiph Foundation. Xiph is responsible for the Ogg multimedia containers for both audio (Vorbis) and video (Theora). Ogg is free as in free speech and is not patent-encumbered like the MP3 and AAC multimedia codecs. Monty briefly touched upon the fact that H.264 can now be used royalty free; he does not believe Ogg will be affected negatively by this development. However, he understands that free software should be the vanguard of change, and he insisted that developers help in that effort. He was actively recruiting the best and brightest to work with the Xiph Foundation. The GOOG's WebM effort and Xiph will make for an interesting discussion; I am not sure that he addressed that development at any great length.

It appears that OLF grows each year, and I am glad that they are adding new tracks. The medical track appeared to be a huge success; hopefully they do it again next year. There was a Diversity in Open Source discussion which I missed because I took off late Saturday evening. Perhaps if they change the scheduling of that particular discussion, it might receive a larger audience. There really is a dearth of hacker-specific Linux conferences. I have not attended South East Linux Fest (SELF) nor the Southern California Linux Expo (SCALE). Oh how I long for the days of Atlanta Linux Enthusiasts (ALE), which was actually my very first Linux conference; OLF comes extremely close to that experience. Linux World Conference and Expo (LWCE NYC) was a good conference but later became saturated with suits.


The Slackware logo. (Image via Wikipedia)

Roughly 3.5 years ago I purchased an Acer Aspire 5100. Initially, it was running Vista Home. I promptly wiped the hard disk and installed Debian/Stable. For whatever reason, I ran into some very weird kernel regression problems related to the clocksource (TSC unstable due to cpufreq). I finally got fed up and installed slamd64, the unofficial 64-bit Slackware port. At that point official Slackware did not exist for 64-bit AMD or Intel. I believe the official Slackware64 port was released 5/19/09; Eric Hameleers and crew did an outstanding job.

So, earlier this year I began scouring the interwebs looking for clues on how to upgrade directly from slamd64 to slackware64. Typically, upgrading pkgtools and glibc first, followed by the rest of the base Slackware package series (a through l), will do the trick. However, since I had already backed up the notebook there was no reason to play around with a live update. I wiped the disk clean and installed slackware64-13.0. When I was running Debian on that notebook, I ran LUKS (Linux Unified Key Setup) encryption on the /home and root filesystems. *Quick Aside* Encrypted filesystems are useful on notebook computers; they keep people from getting at your confidential data.
The Debian installer presents LUKS as an option during the install process. Though some may see this as an advantage, I never really understood what was happening behind the scenes; I didn't realize that the /boot partition was not actually being encrypted (and for good reason). LVM is fairly trivial to set up. In fact, my notebook sports a modest 80GB disk, so it is questionable whether I really need LVM. Nonetheless, I figured that with LVM installed I would be able to grow and shrink partition sizes. That could never be a bad thing :-)

After making the choice to run Slackware on the notebook, I used the very helpful README files covering LVM and its LUKS companion. Unfortunately, I munged the install by incorrectly setting up the initrd that the LILO bootloader needs in order to boot the root filesystem.
For those unfamiliar with initrd, it is a temporary filesystem which loads specific pieces (i.e. kernel modules for the hard disk, filesystem, etc.) into a ramdisk to aid in booting the Linux kernel.

Actually, in my earlier days of running Slackware (before the era of exotic hardware and large hard disks) I never needed an initrd. Somewhere around kernel 2.4.x I began using one, as Volkerding suggested it in the install README files. Anyway, if you do not set up initrd correctly, you'll be greeted with a nice kernel panic :-)

How does one rescue their installation when this occurs? Here are some steps that should help. Once again, I was grateful for the help I received from alienBOB on ##slackware.
Please be advised that these instructions are geared towards my setup.
For my LVM setup, I had a single volume group (cryptvg) containing a few logical volumes.
I chose to leave one partition, /boot (/dev/sda1), unencrypted, while the entire root filesystem on /dev/sda2 uses LUKS. You must also remember to keep your LUKS passphrase in a safe place; perhaps store it on an encrypted USB keychain, or simply in your brain.
If you forget the LUKS passphrase, you will have to wipe the hard disk and re-install your operating system.

Do the following -
Boot your system with the Slackware install CD/DVD (actually any linux distro live CD should work)
Run 'fdisk -l' to view your partition table
cryptsetup luksOpen /dev/myrootfs cryptslack
Respond with your LUKS passphrase
vgscan --mknodes
vgchange -ay
Run 'ls -la /dev/cryptvg'
lrwxrwxrwx  1 root root    24 2010-09-25 08:27 home -> /dev/mapper/cryptvg-home
lrwxrwxrwx  1 root root    24 2010-09-25 08:27 root -> /dev/mapper/cryptvg-root
lrwxrwxrwx  1 root root    24 2010-09-25 08:27 swap -> /dev/mapper/cryptvg-swap
Now we'll mount the LVM partitions and set up our chroot environment.
The chroot environment is necessary for us to correct the initrd setup on our native root filesystem. Remember that the installer has its own rootfs, /etc/fstab, etc. If you're unclear about chroot, I'd recommend reading up or running the ubiquitous GOOG search.

Once the logical volumes are visible, mount them:
mount /dev/cryptvg/root /mnt
mount /dev/cryptvg/home /mnt/home
mount /dev/mybootpart  /mnt/boot

All of your LVM volumes and the boot partition should now be mounted. Next we must set up our chroot environment.

mount -o bind /proc /mnt/proc
mount -o bind /sys  /mnt/sys
mount -o bind /dev /mnt/dev
chroot /mnt              --> gets us into chroot

While inside chroot ::
vgscan --mknodes
vgchange -ay

At this point you should be able to see your installed Slackware root filesystem. Be sure to check /etc/lilo.conf and the file size of initrd.gz. My clue was that initrd.gz was a mere 20KB, which told me that the initrd had not been built correctly.
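
For reference, the relevant lilo.conf stanza should look roughly like the following; the kernel image name here is only an example, so point it at whatever kernel your install actually uses:

image = /boot/vmlinuz-generic-2.6.29.6
  initrd = /boot/initrd.gz            # the initrd we are about to rebuild
  root = /dev/cryptvg/root            # root lives on the LVM-over-LUKS volume
  label = slackware
  read-only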

So to fix initrd do the following:

mkinitrd -c -k 2.6.29.6 -f ext3 -r  /dev/cryptvg/root -C /dev/sda2 -L -o /boot/initrd.gz 
Of course, replace the kernel version number with your actual one; using the explicit path to mkinitrd would be wise too. Take a look at 'man mkinitrd' to decipher the parameters.

After rebuilding the initrd, check the timestamp and file size of /boot/initrd.gz.
They should be different from your original initrd.gz.

After you again check /etc/lilo.conf for errors (pointers to the correct boot image and initrd.gz), run '/sbin/lilo -v'.
type exit to leave chroot
reboot..

All should be well again. If all went well, after your reboot you should be prompted for your LUKS passphrase. If not, rinse and repeat ;-)










Ode to the ARM

If you haven't been living under a rock the past few years, you might have noticed a growth in the marketing of smartphones. If you have not noticed the television marketing of the dominant players which use Android and iOS, various social networks are also buzzing with it. Besides the wrangling of hobbyists and loyalists, we also have financial pundits weighing in on the debate. People seem to praise that Cupertino company for its inroads and its supposed stance on HTML5 standards. Now that H.264 is being provided royalty free, it changes the game somewhat. You will begin to see more of those single-purpose commodity flip video recorders, which only encode H.264. I digress and will expound on this point in a future entry.

What I find most interesting about the ARM architecture (originally the Acorn RISC Machine) is that it challenges the duopoly of Intel and AMD, much like Linux and BSD have done in the desktop and server markets. For the sake of argument, let's assume that there are virtually no other players in the CPU space at the moment. Having said that, the dominant players are fixated on the desktop and server markets. 64-bit architectures are becoming increasingly popular, despite the fact that most programs you'll find on the garden variety desktop computer are still 32-bit.

Anyway, Intel and AMD abandoned low-power applications for quite a while, since there was not a great deal of profit margin there. ARM has been around since the '80s, well before set-top boxes or mobile devices became commonplace for consumers. Low power consumption and extended battery life are basic tenets of the ARM architecture. Additionally, developers get the added benefit of working with an essentially "royalty free" architecture. The latter is the "killer" feature that makes it very difficult for the incumbents to compete: a simple RISC instruction set coupled with developer-friendly licensing schemes. Wait, there is more..

Because RISC has been around for quite some time, it is well understood; thus Unix and Unix-like operating systems thrive on this platform. Though ARM has made incremental changes for performance, everything is well documented, standards-based engineering.

Obviously, ARM isn't perfect, because people still complain about battery life. I would suggest that until we discover how to reconstruct matter... well, basically there is no such thing as a perpetual heat engine, so we'll likely be complaining about battery life for quite a while. ARM still beats Intel and AMD quite easily when you compare battery life and heat dissipation. Perhaps if I get some time I'll share some of the benchmarks, which can be obtained rather easily on your local Interwebs.

Personally, I have three ARM devices. Let's count them.. Linksys NSLU2, Nokia N800, and Nokia N900 (Cortex). How many do you own?


M$ Ill-fated VoIP Play Re-Visited

open_source_communism (Image by jagelado via Flickr)

Though this is quite old news, it emphasizes the power of free software and open source tools.
For some it really is difficult to fathom how Asterisk, an Open Source telephony framework which does little if any traditional commercial advertising, could grow in size and defeat Response Point, a proprietary offering from M$. If you are one of the confused people, allow me to use the Apache web server as an analogy. At the dawn of the Internet, Unix was the cornerstone operating system. The balkanization or splintering of Unix caused its eventual downfall and allowed Windows to flourish. Fast forward to the browser wars of the mid '90s, and you have Netscape battling the Redmond Woolly Mammoth and its Internet Explorer. Most everyone knows how that ended; however, Netscape begrudgingly sowed the seeds of the Mozilla Project, which later released Firefox. Now you're probably asking, "What does this have to do with VoIP?"
Just trying to paint the picture for you.. Be patient ;-)
M$ discovered the Internet only several years after Unix was already the dominant player in that space. Redmond used unfair business practices to game the system and supplant Netscape as the dominant browser. Redmond also tried very hard to create a very closed Internet experience, one that was "best suited for Internet Explorer." I'm sure many of you will remember those websites that explicitly demanded that you view them with IE. To hell with standards, right??

IMHO, this gestapo move failed largely because of the Apache Foundation and the Mozilla Project. The Apache web server is a very successful free software project; it used a standards-based approach to gain roughly 50-60% of the market share. Its governing board is quite small, and the Apache Foundation is certainly not a Fortune 100 company. Truly remarkable, and a testament to the F/OSS product development model.

Now getting back to VoIP. Why was Redmond's attempt to get into the VoIP and CRM market ill-fated? Was it the price point? Was it the fact that its business model was highly dependent upon leveraging already existing M$ application software plays (i.e. Sharepoint, .NET, etc.)? I call this the perpetual up-sell. How about the fact that several F/OSS equivalents were already entrenched in the VoIP and CRM space (Asterisk, Yate, FreeSwitch, SugarCRM, etc.)? I would assert its demise was a combination of all of the above.

It would appear that Response Point was yet another example of M$ missing the boat and getting to the party too late, very much like its discovery of the Internet in the early to mid '90s. The only difference is that they could not embed their VoIP product in the operating system, and of course Asterisk, FreeSwitch, and Yate bear no resemblance to Netscape Navigator. Nope, M$ would now be forced to compete on a relatively level playing field against software that is not owned by anyone. How do you defend yourself against a virtually invisible foe? In most cases when you fight an invisible foe you get beat upside your head :-) Particularly if the adversary is agile free software. Additionally, it would appear that the Redmond wayward product could only be marginally useful if it was coupled with Active Directory, Sharepoint, and .NET.

The release cycles of Response Point were simply too slow. For instance, Redmond launched the product in 3/07, SP1 was released 7/08, and SP2 was scheduled for the 1st quarter of 2010. Talk about gaps between releases. In contrast, the release cycles for Asterisk, FreeSwitch, and Yate are far more frequent. It probably is unfair to compare Redmond's wayward VoIP effort with the very mature Asterisk Project; according to TFOT, Spencer conceived that telephony software as far back as 1999. So I'll focus on FreeSwitch and Yate, considered newcomers in the free software telephony world. FreeSwitch released its v1.0 in '08 and Yate released its v1.0 in 7/06. According to the Yate project roadmap, it releases software on a 3-6 month cycle. That is impressive, but certainly not unusual for F/OSS projects: release early and often, with the appropriate amount of community support.

Methinks Response Point was also too closely coupled to M$ back office applications to be truly useful for the average SMB. Unless you've already committed to being a 100% M$ shop, I would argue that licensing costs alone would make their VoIP play too costly. Besides, they hadn't even figured out Outlook integration; this "feature" was not due to arrive until SP2. If you're using any sort of dashboard or GUI in your telephony offering, click-to-call is an expected feature.

Response Point died in the womb, due in large part to F/OSS offerings that were more mature. M$ was unable to generate significant interest in the product for a host of reasons. A Redmond customer relationship management (CRM) tool is also in the wild, but is likely on life support due to the large number of competitors in this space (i.e. SugarCRM, SalesForce, etc.).








WPA2 Hex-Key Passphrase Limitations?

Industrial wireless access point (Image via Wikipedia)

I suppose it is common to limit passphrases to 63 characters on Linux machines; perhaps this is a well established standard. Actually, I've used a 64 character hexadecimal passphrase for my home wireless access point for the better part of 4 years. The majority of the people who visit my home are using windoze notebook computers, and the passphrase seems to be readily digested without incident. However, now that I have three Linux-powered wireless devices (Nokia N800/N900 and Acer Aspire), it would seem that the 64 character hexadecimal passphrase cannot be digested using the standard wireless tools on my favorite OS.

The passphrase was generated with Steve Gibson's ultra-high-security password generator :-) I seem to recall him advertising that the passphrases are ideal for Wi-Fi networks.

After performing the ubiquitous GOOG search, I stumbled on a couple of summaries written by folks who have had similar problems. It seems that the 63 character limit helps tools distinguish between ASCII passphrase input and raw hexadecimal key input: a WPA2 passphrase can be 8 to 63 ASCII characters, while a full 64 hex digits is interpreted as the raw pre-shared key itself. At least this would seem logical for the Nokia Nxxx internet devices. I'm not sure if the explanation is similar for wpa_supplicant on your garden variety Linux system.
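
For what it's worth, wpa_supplicant itself handles both forms; the difference is just quoting. A sketch of the two variants (the SSID and key values here are made up):

network={
    ssid="homenet"
    # quoted value = ASCII passphrase, 8 to 63 characters
    #psk="correct horse battery staple"
    # unquoted value = the raw 256-bit PSK, exactly 64 hex digits
    psk=0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
}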

Anyway I'll try this hint and report my results when I get a moment.

Foray into OpenVPN

Diagram of a public key infrastructure (Image via Wikipedia)

It has been a while since I shared another segment in the Foray Series. For the newcomers, these excerpts are fairly detailed accounts of my experiences with various FOSS tools. Understand that these entries are not intended to be detailed How-Tos; I leave that to the curiosity of the readers. I would be remiss if I did not mention that I was inspired by Mick Bauer's recent Linux Journal series on the subject. So, nuff respect to Bauer and the LJ crew for producing the longest running Linux technical periodical. If they could only reduce the advertising.. Heh, that's a rant for another time.

Actually, I was first introduced to the concept of a virtual private network (VPN) in 2000. An associate of mine was administering a FreeSwan installation. Honestly, I had no idea what he was talking about, because my knowledge of networking was rather weak. For instance, I didn't know the difference between the IPSec and PPTP tunneling protocols. Suffice to say that there are several ways to implement a VPN strategy, and some tunneling strategies are inherently more secure than others.

So I figured that I'd better learn about VPN tunneling protocols and also deploy a solution that is fairly idiot-proof and turn-key. As it turns out, the IPSec tunneling protocol has been around for quite a while. It is fairly complicated to set up, but IPSec is an order of magnitude more robust than Microsoft PPTP. The OpenSwan project (formerly FreeSwan) deploys the more robust IPSec tunneling protocol. The encryption strategy of PPTP is very inferior; in fact, PPTP was designed on top of the very old PPP (Point-to-Point Protocol) from the dialup modem era. I mention PPTP here because it is often a readily available option on several modern commercial-grade VPN routers. My Cisco RV082 VPN router provides PPTP out of the box as a VPN solution. This protocol is advertised as an easy means of creating a VPN tunnel between two M$ clients; I have often found that _easy_ is often quite dangerous too ;-) IPSec is also baked into the Linux kernel, so it can be deployed alongside iptables filtering at the kernel layer. In fairness to M$, its more modern OSes also ship an IPSec client; however, in true Redmond fashion they have "embraced and extended" IPSec in a way that puzzles most. Basically you have no idea what you're running, so is it really IPSec?

There is however a snag with the free IPsec clients from Microsoft. You can use IPsec only in combination with another protocol called L2TP. It is fairly difficult (2000/XP/Vista) or probably even impossible (MSL2TP, Pocket PC) to get rid of this L2TP requirement. One might say that Microsoft "embraced and extended" the IPsec standard, in true Microsoft fashion. To be fair though, L2TP is currently a 'Proposed Internet Standard' (RFC 2661 ) and so is 'L2TP over IPsec' (RFC 3193). PPTP, on the other hand, is another widely used VPN protocol but it is not an official standard.
The excerpt above came from this very good FreeSwan article

Perhaps it would be helpful to understand why I have begun to utilize a VPN. When I am traveling or working in a Panera Bread or Barnes and Noble, I like to take advantage of the public Wi-Fi. Typically what I do is fire up a terminal, ssh into my Linux box at home, and forward TOR and a SOCKS5 proxy to local ports on my notebook computer. For the curious, check some of my earlier entries on that subject. This strategy works quite well, but when I ask staff members who are less computer savvy to open a terminal window and then run ssh, their eyes glaze over and they begin to complain that it is not a "pretty" solution.
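
For context, the geek-only approach amounts to a single ssh invocation. Roughly like this (the hostname and ports are illustrative; my earlier entries cover the details):

# -D opens a local SOCKS5 proxy; -L forwards the remote TOR SOCKS port to the notebook
ssh -D 1080 -L 9050:localhost:9050 user@home.example.org
# then point the browser's proxy settings at localhost:1080 (or 9050 for TOR)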

So a more elegant solution was required to gain wider adoption within our organization. Furthermore, you can't sell what you don't own: if you're going to propose an alternate solution for accessing data securely, you must be willing to use it yourself. Some people call it "dog fooding". Besides, I'm always excited about learning something new.

As I stated earlier, I looked at OpenSwan (the FreeSwan fork), or more generally IPSec, and it did look rather confusing to me. Moreover, I wasn't quite sure how active the developer community around that project was.

Let's take a quick look at what makes OpenVPN a quite viable VPN solution.

The PKI is the heart of OpenVPN, as it empowers the sysadmin to authenticate a host of clients through self-signed certificate/key pairs which are generated on your own server. This approach is helpful, as it mitigates the need for a central signing authority. It works very much like creating an SSL certificate for an Apache web server and its associated client web browsers. Though OpenVPN is not overly complicated to set up, the PKI process is the area that will likely cause problems for many people. In fact, I scraped my knuckles during this process too. For instance, an incorrectly generated key may not show up until you try to authenticate a client. The error logs reveal important messages, but they are somewhat generic if not cryptic.

SSL/TLS are venerable and well understood IETF encryption standards that are deployed for many web and email servers. MD5 and SHA-1 are commonplace digest algorithms. No surprises here. Arguably TLS has its own faults and vulnerabilities, but because these are "standards", unlike L2TP, the holes get discovered easily and the community resolves problems fairly quickly.

All Linux distributions are equipped with OpenSSL and the means to generate certificate authorities. 'pkitool' is a front end for the openssl tool; it does all of the certificate/key builds. You just have to remember to run "./clean-all" in the appropriate OpenVPN setup directory to wipe all previous keys, otherwise your OpenVPN setup will fail silently but very consistently :-)
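
For orientation, the typical easy-rsa 2.x sequence looks roughly like this, run from the easy-rsa directory shipped with OpenVPN (names like 'client1' are placeholders):

. ./vars                     # load the CA variables (country, org, key size, etc.)
./clean-all                  # wipe any previous keys; skipping this is what bites people
./build-ca                   # create the certificate authority
./build-key-server server    # server certificate/key pair
./build-key client1          # one certificate/key pair per client
./build-dh                   # Diffie-Hellman parameters for the server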

Regarding UDP... Many people prefer TCP over UDP, as the latter makes no guarantees about the arrival of datagram packets. UDP is the default transport for OpenVPN, and it seems to play nicer with firewalls, in large part because lost packets are not retransmitted at the tunnel layer.
TCP's acknowledgement and retransmission machinery adds significant overhead, particularly when you end up tunneling TCP inside TCP. Hence UDP is preferred: it is faster, albeit not as "accurate," and the encapsulated protocols handle their own reliability. So you have a trade-off. My knowledge of packet inspection is limited, so if anyone has a better explanation, I'm very interested.

I'm running OpenVPN on a Debian/Stable box, and it runs quite well. After I resolved my PKI issue, the only other gotcha occurred because I didn't explicitly enable packet forwarding on the server. More specifically, I had to 'echo 1 > /proc/sys/net/ipv4/ip_forward' to enable IP forwarding. Failing to do this will also break your OpenVPN setup.
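
To make that setting survive a reboot, and to actually route tunneled packets out to the rest of the network, I also needed a NAT rule. A minimal sketch, assuming a 10.8.0.0/24 tunnel network and an external interface of eth0 (adjust both to your setup):

# /etc/sysctl.conf -- the persistent equivalent of the echo above
net.ipv4.ip_forward = 1

# apply it without rebooting
sysctl -p

# masquerade traffic arriving from the tunnel out the external interface
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE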

Lastly, I had to pick the correct virtual IP addresses to push to both sides of the tunnel. Before I did, I was not able to ping either end of the tunnel; using tcpdump I was able to ascertain that the packets arriving from the M$ Windows clients were being dropped. Once I enabled routing and changed the virtual IP addresses, the problem went away.
Originally I had chosen 10.0.0.0/24, but then realized that places like Panera Bread and others use the same addressing scheme on their LANs, which played havoc with reaching resources located on the target server.
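
For what it's worth, the relevant server-side directives ended up looking roughly like this; 10.8.0.0/24 is just an example of a range unlikely to collide with coffee-shop LANs, and the rest are stock OpenVPN options:

# /etc/openvpn/server.conf (excerpt)
port 1194
proto udp
dev tun
server 10.8.0.0 255.255.255.0       # virtual subnet handed out to clients
push "redirect-gateway def1"        # optionally send all client traffic through the tunnel
keepalive 10 120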

Lastly, a word on pseudo two-factor authentication. By definition, two-factor authentication combines something you know (i.e., a password) with something you have (i.e., a secure token or passport). OpenVPN's PKI plus a passphrase protecting the client key really isn't the same thing, but it sure feels very robust to me.

OpenVPN is a very robust solution for my needs. SSL/TLS is a fairly simple means of leveraging free software and well understood standards/protocols to securely encapsulate data packets on both sides of a VPN tunnel. However, I am not sure that enterprise networks view it the same way. In fact, I know that my employer blocks the UDP port on which OpenVPN servers typically listen.

Hopefully, this sheds a bit of light on the various VPN strategies and also some of the virtues of OpenVPN.


Mobile Open Source: Better lucky than smart

Assorted smartphones (image via Wikipedia)

Every so often I get inspired by a random blog entry, and Fabrizio's post gave me pause. Firstly, I have long admired the Funambol project and its very slick SyncML application. Anytime you can sync your smartphone to a server that _you_ own, without any proprietary middleware, I must simply rejoice.

I have been pseudo-ranting about the state of Linux-based smartphones. Most of my angst comes from the lack of competition in this space. I don't happen to be an Android fanboi, as I struggle with the tracking that GOOG has deployed in their default software stack. Sure, I know that most of that stuff can be disabled at the shell level, but I'm paranoid. He highlighted an alliance of sorts between Intel and Nokia - the MeeGo project. I will have to learn more about MeeGo; perhaps it is something that bears watching. At least from the standpoint of challenging the GOOG, people who profess to love free markets should be grinning from ear to ear. Geez, just 3-5 years ago the mobile OS market was quite stagnant: you had Symbian, PalmOS and crappy Windows Mobile. I suppose some might argue that outside of the iPhone's BSD with thin layers of proprietary paint, and Android, not much has changed. Well, you might be correct if you were just looking at smartphones. The entire world of MIDs has really taken a leap forward in the last 3-5yrs. Outside of the Apple Newton (a device that few folks understood, and one that peaked much too soon) and the Palm devices, there had not been much innovation on tablets.

Somehow, I wish that Palm ALP could seriously challenge Android and offer consumers more choice. Perhaps this is just wishful thinking. I suppose time will tell.

FOSS Mobile - Better Lucky Than Smart


Foray into MythTV

Diagram of a possible setup (image via Wikipedia)

At this point it is probably worthwhile to admit that I am guilty of "paralysis of analysis" in a big way. I suppose that having this problem can be detrimental when dealing with technology, particularly multimedia hardware. In 2006, I purchased some throw-away components with the intent of building my own MythTV PVR. I did nothing with the hardware setup, and I didn't spend much time configuring the software (KnoppMyth R5). In truth, the catalyst that drew me to the project in the first place was the dreaded "broadcast flag" that cable companies used to threaten consumers with in an effort to appease Hollywood. I ran out and purchased the pcHDTV HD-3000 for ~$185.00, as it was the only DVB card that was oblivious to the broadcast flag. It would happily grab OTA digital content so that it could be viewed later.

I was so amped to get started with this project... then life got in the way, and I realized that I don't watch that much TV. So the hardware aged and the technology train left without me. The box was built from an Intel motherboard (i815 chipset), 512MB of RAM, and two Hauppauge WinTV PVR-350 analog tuner cards. The analog tuners are virtually useless now, as most cable networks have killed their analog spectrum in favor of the gov't-mandated digital spectrum.

*Sigh* Paralysis of analysis, I curse thee ;-) My MythTV box is functioning as a frontend/backend combo. It contains a modest set of ATA hard disks: a 250GB drive that holds the Arch Linux-based distro (LinHES R6.01) and a 350GB disk for additional storage. The myth box also mounts the NFS share from the household's inexpensive NAS, the Promise NS4300N. As stated in a previous entry, the NS4300N contains four 1TB Seagate drives running RAID 5, for approximately 3TB of usable storage.

Anyway, I still have the HD-3000, which will help me capture the unencrypted content on the Comcast network. At this point I still need to tune the card to the appropriate frequencies. I'm getting some channel frequency errors from the HD-3000:
DVB: adapter 0 frontend 0 frequency 959000000 out of range (44000000..958000000)
dtvscan[9374]: segfault at 0 ip 0804bf04 sp bfc85540 error 4 in dtvscan[8048000+5000]

Running "dtvsignal" a script provided by the folks at PCHDTV

dtvsignal -q
using '/dev/dvb/adapter0/frontend0' and '/dev/dvb/adapter0/demux0'
setting frontend to QAM cable
tuning to 57000000 Hz
video pid 0x0021, audio pid 0x0024
dtvsignal ver 1.0.7 - by Jack Kelliher (c) 2002-2007
channel = 2 freq = 57000000Hz table 57
channel = 2 freq = 57000000Hz
30db 0% 25% 50% 75% 100%
Signal: | . : . | ._____:_____._____|

So here are the MythTV challenges:

  • Setting up the HD-3000 as the primary or default capture card
  • Tuning the HD-3000 to the appropriate frequencies for Comcast
  • Setting up LVM to easily handle storage growth (see the sketch below)
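
On the LVM front, the rough plan is something like the following sketch. The device names (/dev/sdb1, /dev/sdc1) and the volume/filesystem names are placeholders for illustration only:

pvcreate /dev/sdb1                          # initialize the new disk/partition for LVM
vgcreate vg_media /dev/sdb1                 # create a volume group for recordings
lvcreate -L 300G -n recordings vg_media     # carve out a logical volume
mkfs.ext3 /dev/vg_media/recordings          # put a filesystem on it

# later, when another disk shows up, grow the volume without reformatting:
pvcreate /dev/sdc1
vgextend vg_media /dev/sdc1
lvextend -L +500G /dev/vg_media/recordings
resize2fs /dev/vg_media/recordings          # grow the ext3 filesystem to match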

Obviously, I also want access to the encrypted HD content, so I'll likely need to purchase the Hauppauge HD-PVR (1212). That way I'd be able to record two channels and also watch live TV.
At present, the setup will allow me to record and watch live TV. I'd like to remove the analog tuners (PVR-350); this would free up space on the motherboard so that I could retire the very old AGP NVidia 6000-series graphics card in favor of a PCI NVidia 8000-series card capable of VDPAU.

Hopefully, I can resolve these issues over the next couple of weeks. Ultimately, the myth box will leave the lab and become the centerpiece of our entertainment system. There is still much to do before it can be wife tested :-)

More on this later.


OpenEMR - At a glance

This will be a short summary of my first encounter with the OpenEMR software package. I wish I could say that I simply stumbled across OpenEMR, but in fact I had been supporting a joint business venture. At the time, the principal was resigned to using a turn-key or shrink-wrapped electronic medical record. The rationale was that it would be far easier to use a program specifically suited to their industry. There were a few prerequisites: client scheduling, therapy notes, and syncing calendars with smartphones.

We immediately discovered that the shrink-wrapped software, while quite polished and sexy looking, did not afford us the ability to customize it or provide the key features we were seeking. The idea of having the application share its data with other applications was out of the question. Enter OpenEMR. I suppose it would be meaningful to explain the term electronic medical record. If you've been listening to the rhetoric from politicians and news media alike, you will have heard EMRs discussed. In a nutshell, EMRs provide a unique way to share, track and manage patient medical history. Perhaps most importantly, EMRs help empower patients to actually manage their own medical history.
IMHO, EMRs help de-mystify the practice of primary care clinicians. Allow me to share a scenario... I decide to go skiing in Aspen. While on the ski trip, I suffer an awkward fall and break my collarbone (not too far from the realm of possibility) and do not have access to my primary care physician. I get rushed off to an emergency room in CO. I get there and nobody can talk to me because I am unconscious and my wallet has been stolen. Wow, what a mess...

So, let's imagine for a second that I have a USB flash drive or some other digital repository on my person. Additionally, the hospital or clinic has an electronic patient registry that contains information linking my personal health record with that of my local physician. I know this sounds futuristic and highly improbable, but this scenario is roughly the blueprint for the Patient-Centered Medical Home (PC-MH). The aforementioned link includes a short Shockwave video clip that does a very good job of explaining the concept of PC-MH; at roughly 2:30 in the clip, electronic medical records (EMRs) are mentioned. Anyway, now that I've digressed, let's get back to OpenEMR, shall we?



Retiring the Smoothie

AMD K5 PR166 (image via Wikipedia)

After many years of firewall happiness, I was finally forced to retire my Smoothwall DIY firewall. I built this box back when it was sexy to create a firewall from scratch using a throw-away beige box. In fact, the box was powered by the venerable AMD K5 CPU (133MHz), and it had run flawlessly since 2000. Well, why retire it now? Firstly, I need to reduce the thermal footprint in the lab. Secondly, the Squid proxy seemed to be causing me some issues. And now that I run a managed switch (Linksys SRW224P) and a VPN router (Linksys RV082), I didn't see the point in running two firewalls.

Oh well, I can re-deploy the box for another purpose. It was cool to drop two NICs in the box and have complete control over the behavior of the packet filtering. The box migrated from ipchains to iptables over the years. How sweet it was ;-)


Promise NS4300N - Revisited

rsync (image via Wikipedia)

Now that I have rooted the device, it is infinitely more useful to me. My experience with this NAS has been rather interesting. As mentioned in a previous entry, I have four 1TB Seagate drives. The NAS is configured for RAID 5, thus I have slightly less than 3TB of usable disk space. A simple 'df -h' yields the output below:
glutton:/VOLUME1/VIDEO 2.7T 198G 2.6T 8% /smartstor_video
Rsyncd runs on port 879. It seems the Promise engineers set up the daemon so people can mirror their data with other NS4300N units also running rsyncd. I still find it odd to run rsync without encapsulating or encrypting the data with ssh. I was initially confused by the double colons '::' used to denote the remote module name, for instance 'rsync -av host::src /dest'.

I still need to write a script to back up the home LAN nightly, and since there is no ssh daemon running on the NAS (bummer, no keys), I'm forced to store passwords in a text file in the appropriate /etc directory. Clearly not ideal, but it gets the job done. Despite the use of this flat file, I still get prompted for a password, so obviously I am still doing something wrong. I also have not tried all of the DLNA features of the SmartStor NS4300N. Universal Plug and Play (UPnP) is supposed to work out of the box, which will be useful when I begin streaming music to various desktop clients in my home.
Once I got NFS working properly, I was quite gleeful. Though Samba also works out of the box, I'm not that interested. I simply don't have very many windows boxen in my home.
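
As for that nightly backup script, a minimal sketch of what I have in mind is below. The module path and user are guesses on my part, and note that rsync's --password-file wants a file containing only the password, readable by no one else -- which may well be why my flat file still prompts:

#!/bin/sh
# push /home from this box to the NAS rsync module every night (run from cron)
rsync -av --delete --password-file=/etc/rsyncd.secret \
    /home/ backup@glutton::VOLUME1/backups/$(hostname)/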

These days I store all of the home multimedia content on the NS4300N, and it happily serves up movies to my Neuros OSD, so I can play movies on the television in the living room. Not earth-shattering, but very convenient. One day I'll get off my dead ass and finish my MythTV project; between my Asterisk excursions and other distractions or diversions, I never seem to get everything done.

When I initially purchased the Promise NS4300N, I was merely interested in backups and a storage medium for all of my acquired digital content. The average DVD rip eats 1.4GB of disk space. I suppose I could rip my entire DVD collection (not practical for the average person), as I only have four DVD films... well, you can see how this would add up. Eventually I'll have family videos and, of course, the wayward netcast, the AG Speaks audio show. I installed MediaTomb on the NAS, so streaming content should be fairly easy to achieve. I will post more about this device as time permits.

Rsync is a wonderful tool and, IMHO, totally blows away the venerable tar for the tasks I need to perform. I once tried, unsuccessfully, to restore a 5GB gzip'd tar archive. I must say it was a very humbling experience ;-(


Neuros OSD

Neuros OSD (originally uploaded by AG_)

I purchased this device roughly a year ago, and it collected dust for nearly six months after purchase; I just never seemed to have the time to set it up. One day, after listening to an episode of TLLTS in which Neuros founder Joe Born expounded on the OSD (Open Source Device) and its successor (the Link), I figured I would take it out of the box.

For whatever reason, I had problems getting my TV to play nice with it. The remote worked fine, but I could not get the video out working. After spending a bit of time emailing the mailing list and doing the ubiquitous GOOG searching, I decided I would send an email to Joe Born himself.

Mr. Born was very happy to help me. In the true spirit of Open Source, he was willing to share solutions. Even when my own hubris got in the way, Joe always took the position that the device should just work. I got the feeling that he assumed a personal responsibility to make sure neither he nor Neuros Technology left any customer behind. A very refreshing perspective indeed, particularly since the OSD is a first-generation device that has already been superseded by their next hardware model, the Link. So he really did not have to help me.

Well, I did get it working. As it turned out, the Sony TV would display the OSD menu system on the DBS display option rather than the Video 1 input. The OSD now serves as a video extender, playing all of the video served up via NFS from my Promise NS4300N NAS.

I have only encountered a couple of problems. From the photo you can tell that I've got every port occupied. The 16GB CompactFlash card houses the Arizona firmware, which of course runs Linux (a 2.6.x kernel).

It would be great if there were a means to reliably handle video over Wi-Fi. The CAT5 connection looks hideous, as I have to run a very long cable to the device. This is more an aesthetic problem than a technical one.

After I set up the network settings and flashed the device, I was able to grab roughly six firmware updates. As I stated earlier, it had been sitting in a box for quite a while. I like the ability to grab YouTube videos and play them on my TV. The only problem is that GOOG continues to change the manner in which video clips are streamed, or at least stored, on their servers. Moreover, I am not sure whether any Neuros developers are still working on Arizona firmware updates; I would imagine people have simply grown weary of the moving target that is YouTube / GOOG video storage. Nonetheless, Python scripts like 'youtube-dl' work just fine, and I have been able to grab files from YouTube. Who knows; I'll have to visit #neuros or their mailing list to see if anyone has a solution.

Bottom line, I luv the device and it simply just works.


Ohio Linux Fest 2009

Having missed OLF last year, I was determined to make the 3.5hr drive to Columbus, OH this year.
Though I arrived much later than desired, I enjoyed the following talks.

  • Fedora, OLPC Lessons Learned and Where Do We Go From Here - David Nalley
  • Open Source Telephony in an Economic Downturn - John Todd

It was interesting to hear a perspective on the value of OLPC some five years after the project was launched. Nalley addressed a range of questions that touched upon XO deployment, global politics, education, and the problems created by Negroponte's decision to entertain running M$ XP on the XO. Some people in the audience were disappointed in the dearth of XO availability in the US. The prevailing argument is that we have developing "villages" on US shores too. What about our own children?

The G1G1 (Give One, Get One) program is an on-again, off-again program that does provide a means to get your hands on the neat XO hardware. Of course, the crux of the project is delivering these machines to developing nations. IMHO the entire netbook market was spawned by the OLPC project. Obviously this was not Negroponte's intent; nonetheless, OLPC leveraged free software and raised the ante in the computer manufacturing sector. OEMs must now rethink their software design principles and, of course, deal with slumping sales that have led to razor-thin profit margins.

I got an opportunity to meet John Todd, Digium Community Leader. It was fun giving him a hard time about running OS X at a Linux conference :-) Somehow we got onto mobile handsets, and I learned that he carries a complement of 4 or 5 of them. That deserves a wow. His talk was quite appropriate, since we are in the midst of one of the worst economic downturns on record. I particularly appreciated how he explained that Asterisk and other free software can provide economic freedoms not realized with proprietary software.

The stick figures he used in his presentation were also popular with the crowd.

Lastly, the keynote was especially satisfying, as it capped the "Celebration of 40 Years of Unix" theme for OLF 2009. Dr. Doug McIlroy provided the audience with an excellent account of the virtues as well as the vices of the venerable Unix operating system. One particularly humorous highlight was the SSH reference: he noted that there are at least 64 different switches (CLI options) that can be used to solve various problems with SSH. Certainly a far cry from small tools that do one job well, which incidentally has been the mantra of Unix users for a number of years. Yes, of course Unix is user friendly; it is just particular about the friends it keeps :-)

It was fun chatting with Dr. McIlroy after his talk; he made many contributions to Unix while working at AT&T Bell Labs. I asked him about his role in the development of the '|' (pipe) operator. He modestly stated that he did not invent it, but he was the muse for its invention. Quite cool indeed. Heh, perhaps I'll meet Kernighan and Ritchie one day too.

Though I got to OLF later than desired, I really enjoyed meeting all of the TLLTS guys; they were great, of course. This was my second OLF experience, and I hope to attend many more.


In Search of the Linux Smartphone

The Neo 1973 (image via Wikipedia)


I have been monitoring the blogosphere for feedback on the just-released Palm Pre, and the results certainly have not been overwhelming. In truth, if Palm is unable to execute this product launch successfully, it will likely be their last opportunity to reclaim relevance in the smartphone market. I have always enjoyed PalmOS, as it seemed Palm had captured the mindshare of many developers. In fact, Palm founder Jeff Hawkins once stated that Palm purposely wanted their devices to be hackable, in the hope that more developers would write applications for them.

However, with the recent launch of the Pre, some people have indicated that acquiring the SDK has not been seamless. Regarding relevance in the smartphone market, there is still much room for growth in this space, as there are only a few serious players (i.e., Nokia, Apple, and RIM). I didn't mention M$ because, AFAIK, they do not make smartphone hardware; obviously they do have a small share of the smartphone software stack. Clearly Hawkins understood the urgency, so Palm hired two ex-Cupertino executives, among them Jon Rubinstein, who is credited with the product creation effort for the iPod. In fact, he recently replaced long-standing Palm CEO Ed Colligan.

In truth, the majority of people who use cell phones are still tethered to flip-style phones. Smartphones are still a relative newcomer and have a much higher cost of ownership. I purchased my unlocked Treo 650 on eBay roughly 4yrs ago for $225.00.
Since I began using a GSM/GPRS-based phone, I have sworn never to go back to CDMA. I like the idea of a SIM card and the ability to have a functioning device in several countries.
Obviously, I do find the Pre's webOS quite intriguing, as it runs a Linux kernel.
According to other hackers, there is a bit of USB device driver sleight of hand taking place, which permits functionality with iTunes. None of this matters to me, as I will never use iTunes, but I clearly understand that with Linux and open standards all things are possible.

There are other Linux-based smartphones in production and on the horizon. Perhaps the most notable is the Neo FreeRunner, which is powered by the OpenMoko project. Unfortunately, there were several early manufacturing snafus, and it appears they have a glut of inventory that they cannot exhaust. IMHO the project still has a great deal of promise and, contrary to popular belief, there is significant interest in a totally open smartphone platform. OpenMoko essentially provides you the building materials to create your own smartphone stack. The hardware is open, and of course the software is too; you can run any software image of your choosing. This is much more than the GOOG G1 is willing to offer. Although Android is essentially a Java-based SDK, it runs atop a Linux kernel. However, GOOG has attempted to appease T-Mobile by not allowing folks to gain root access. Nonetheless, if you probe deep enough, I would imagine gaining root access is a trivial exercise. It is possible that Android will win out only because of the tremendous GOOG capital war chest, and not due to any technical merit. This space does bear watching. Indeed, it is a battle for the pocket and not the desktop. Linux can run everywhere and anywhere, at any time.



After a one-year hiatus, the wayward netcast has returned. I have been contemplating renaming the show. Clearly the show is _not_ about me :-)
The conversations generally center around F/OSS and web technologies.

My sincere apologies go out to my special guest, Robby Workman, as I took much longer than expected to publish the talk. We had a great discussion, and I would be happy to chat with him again soon. Consider this show Part 2 of the Slackware series. We talked about everything from the anatomy of a SlackBuild script to his thoughts on the OLPC.

Lastly, I revealed the inaugural "Hit and Run" segment. Pure bliss ;-)

I still have not set up an 'ogg' feed; it is forthcoming. Thx for your patience. I'll likely use ListGarden to resolve this matter. In the meantime, feel free to simply download directly.


Download Ogg (67.10min || 36MB)

Download mp3 (67.10min || 21MB)

Shownotes:

Robby's Blog.

While I'm sometimes forced to use unprotected Wi-Fi hotspots when traveling, I do so without much trepidation. Most people complain about the complexities of using a VPN. Frankly, if you have remote access to a Unix or Linux box that is running an ssh server, you can gain essentially the same benefit that a VPN system affords you.

A lesson on the many different ssh 'flags' is beyond the scope here; however, you can tunnel most TCP-based applications via SOCKS v5. I happen to run the Privoxy web proxy and Tor on my box at home.

So if I set up my localhost (in this case my Linux notebook) to accept a tunnel from my box running OpenSSH, I can pass all HTTP traffic through this makeshift tunnel.
Since the Privoxy server listens on port 8118, I set up my tunnel like so:
ssh -NL 8118:localhost:8118 user@host    (assumes sshd is running on port 22 - not advised)

Below is the output from 'netstat -tuap | grep 8118'
tcp 0 0 localhost:8118 *:* LISTEN 13188/ssh
tcp 0 0 localhost:8118 localhost:47018 TIME_WAIT -
tcp 0 0 localhost:8118 localhost:47019 TIME_WAIT -
tcp6 0 0 ip6-localhost:8118 [::]:* LISTEN

So now I still have one more step to get the benefit of privoxy and tor on my notebook.
If you're running Firefox or any Mozilla-based browser (I'm not sure whether IE understands SOCKS), you simply go to 'Edit -> Preferences -> Network -> Settings', select the radio button for manual proxy settings, and add localhost (127.0.0.1) and port 8118 as the HTTP proxy.

Now, to tunnel arbitrary TCP traffic via ssh: 'ssh -D 9999 user@host' (again, this assumes sshd is running on port 22). The '-D' flag tells ssh to act as a SOCKS proxy on local port 9999.
You would then add this information to the SOCKS host field of the manual proxy settings, as we did in the previous step. You should now see the same benefits as if you were running these services on your local box. For people forced to run M$, fear not: you can realize the same benefits by using the PuTTY client. However, you will still need access to a box running OpenSSH on the other end of the tunnel. I don't think that W2K3 server can run OpenSSH natively, so you'll need a Linux box. Get with the program ;-)
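
For convenience, both tunnels can be brought up in one shot. A minimal sketch, assuming sshd listens on a non-default port 2222 (the port and host are illustrative):

# -f: background after authentication, -N: no remote command, just the tunnels
# -L: forward local 8118 to Privoxy on the remote box
# -D: dynamic SOCKS proxy on local port 9999
ssh -fN -p 2222 -L 8118:localhost:8118 -D 9999 user@host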

Boot Loader Polka

Last month I tried, unsuccessfully, to set up GRUB on my shiny new Slamd64 box. As most devout Slackware users will attest, LILO works just fine and is the default bootloader on the Slackware distro. The only reasons I decided to delve into GRUB were Dann Washko of TLLTS and the fact that I'm insanely curious. I'm of the opinion that you never stop learning, so I set out to prepare for a GRUB install.

As stated previously, GRUB is not the default bootloader for Slackware, but you can obtain it from /extra on any Slackware mirror. So once I finished my Slamd64 install, I chose not to set up a LILO configuration. I knew I could always boot from my install CD and drop into single-user mode (telinit 1) to fix things if stuff broke.

As luck would have it, I was not able to get beyond stage 1.5; the bootloader would dump me into the GRUB shell. Not so bad, I thought. Then came the dreaded Error 28 - not enough memory to perform this function. Since this box has 4GB of RAM, I was somewhat confused as to why I was getting this error. After poking around and several reboots, I decided to re-install GRUB.

Actually, I began to get a different error after re-installing GRUB: this time Error 15 (file not found), and it would just hang the system. Quite annoying. I spent a fair amount of time trolling #grub, #slamd64, and of course #lottalinuxlinks. Interestingly, some people forewarned me about GRUB on a 64-bit platform; I was even advised to use GRUB 2. Of course, GRUB 2 is not yet available in slackware-current or any Slamd64 repositories. In general, it seems GRUB is not the bootloader of choice for many Slackware users, and after this experience I can understand why ;-)

Finally, I booted with the Slamd64 install CD, got into single-user mode, and discovered that the root partition was thoroughly munged and could not be mounted without my intervention. Apparently the superblock had become corrupted and could not be read. It is likely I created this condition while repeatedly removing and re-installing GRUB on the MBR; who knows the root cause (no pun intended). All I know is that I never had this sort of problem with trusty, well-understood LILO. I don't care how outdated it is or what its limitations are; it just works when you install it. It was funny to read the comments on IRC as I was learning about GRUB. Of course, I had to set up a chroot environment so that I could work on the appropriate partitions, which were obviously not mounted by the install CD. The rootfs of the install CD certainly was not the area of interest here ;-) Hence chroot was required. Well beyond what Grandma Moses or the average user would be doing...
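
For anyone following along, the chroot dance from the install CD went roughly like this. A sketch only; /dev/sda3 happens to be my root partition:

mount /dev/sda3 /mnt             # mount the installed root partition
mount --bind /dev /mnt/dev       # make the device nodes visible inside the chroot
mount -t proc proc /mnt/proc     # ditto for /proc
chroot /mnt /bin/bash            # now grub-install, lilo, etc. act on the real system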

Back to those IRC comments -- stuff like "Wow, do you really need to do this with a modern OS?" Yes, I am a glutton for punishment, but at the end of the day I am not a masochist. The whole point of installing GRUB was to learn about a new bootloader, nothing more, nothing less. I suppose the corrupted superblock spoiled my interest in going any further.
Try as I might, I was not able to recover all of the missing superblocks. I ran '/sbin/dumpe2fs /dev/sda3' so that I could actually see the backup superblock locations and the associated inode allocations. For those who run other OSes: think of a corrupt superblock as a badly damaged index of the filesystem on a hard disk. Every OS offers repair tools for this sort of thing, and Linux offers several fairly robust ones; e2fsprogs is one of the more complete sets. I was able to confirm the block size and inode counts with 'tune2fs -l /dev/sda3'.
Another nifty trick was dumping all of the pertinent superblock information to a temp file; running '/sbin/mkfs.ext2 /tmp/foo' essentially backed up the superblock details to a temp file called foo. The experienced Linux user will wonder why mkfs.ext2 is used here when I'm actually running an ext3 filesystem, but for this purpose it makes no difference (ext3 is essentially ext2 plus a journal).
Unfortunately, running 'e2fsck -y -b ' (with the backup superblock number supplied) didn't yield favorable results. The '-y' switch was particularly helpful, though, as it spared me from having to acknowledge every repair by pressing y after each problem was identified and fixed; after all, I had at least 20 missing superblocks. I eventually cried uncle, re-installed Slamd64, and set up LILO as I should have done in the first place :-)
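
For the record, the usual recipe in this situation (assuming an ext2/ext3 partition such as /dev/sda3; the exact block numbers depend on the filesystem geometry) looks something like:

mke2fs -n /dev/sda3          # -n: don't actually create a filesystem, just print
                             # where the backup superblocks would live
e2fsck -b 32768 /dev/sda3    # point e2fsck at one of those backup superblocks
                             # (32768 is typical for a filesystem with 4k blocks)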

All works well now, and I cannot honestly blame GRUB for the problems. However, I suppose I confirmed the old adage: if it ain't broke, don't fix it.

Interoperability rears its ugly head

A couple of weeks ago I spent time installing a new kernel and re-configuring file and print services for a pseudo-client. Samba is always the preferred tool for getting M$ clients to play nice with *nix machines. In this case, I was also asked to configure a Windows Vista machine for use with the file server. During this exercise I discovered the most annoying feature of Vista: the highly irritating UAC. Apparently it was designed to annoy you.

Perhaps the most interesting aspect of playing with Vista is that, in typical fashion, with each new Windows release common administration tools get removed or slightly modified so that they cannot be easily discovered. I probably spent 10-15 minutes looking for the application that provides a DOS shell. For whatever reason, the Redmond engineers decided to remove the 'Run' command from the Start Menu.

I'm not sure what benefit is gained by changing the administration interface, but I would imagine some software focus group deemed it necessary. Anyway, after performing the ubiquitous GOOG search, I was able to figure out how to restore the 'Run' command to the Start Menu. I needed it so that I could launch a DOS shell window, my most trusted tool when working on network problems on M$ boxen.

I was able to ping the other machines on the network, but I was not able to make this Vista notebook map shares on the Linux server. After a bit more digging, I discovered that the Redmond engineers had once again modified the default security policy in their network stack: out of the box, Vista insists on NTLMv2 authentication, which older SMB servers (including the Samba release I was running) do not speak. The oddity is that SMB (aka CIFS) is a Redmond invention. Go figure. Hmmm, I wonder why they would do such a thing?

Apparently, you can change the LAN Manager authentication level via secpol.msc to something the Samba server can understand; on editions of Vista that lack the policy editor, modifying the registry is the only solution to this problem.
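
From what I gathered, the knob in question is the LmCompatibilityLevel value. A sketch of the registry tweak, offered as my own reconstruction rather than anything official (a lower value relaxes Vista's NTLMv2-only default, so apply at your own risk):

rem run from an elevated DOS shell, then log off and back on
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f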

The entire evolution took far too much time, and I eventually had to give the Vista machine back to its rightful owner. It is apparent that Vista was an unmitigated disaster; one need look no further than the Windows 7 hype machine that is already brewing, just a year after Vista launched.

Backward compatibility and playing nice with other computers on the network should be a high priority for Redmond; however, the "we're open source too" rhetoric seems to be much of the same ole crap from the woolly mammoth.

Importance of Loopback Device

There are times when I simply create problems unintentionally. I spent probably close to an hour wondering why I couldn't bind to an arbitrary port on localhost. It never occurred to me that the 'lo' interface was missing. Playing around with Debian/Unstable, I noticed that networking seems to be handled differently than on Slackware.

I know that Debian uses SysV-style init while Slackware has always modeled its start-up scripts on BSD, but it sure would be nice if networking were handled in a uniform way across Linux distros. Though this is not central to the problem I experienced a couple of weeks ago, it can nonetheless be annoying. For instance, when you wish to make sure your DNS information is set correctly, the average CLI junkie will simply fire up their favorite editor and modify /etc/resolv.conf, setting the nameserver addresses and search path as appropriate. I believe this is universal across distros. What I have noticed on Debian systems is that third-party programs (the resolvconf package, for instance) can wrap this file and discourage direct editing without some weird switch.

Clearly a different behavior than I have ever witnessed on a Slackware system. Perhaps this is because I never needed any sort of wrapper on a Slackware box? Who knows. Very strange indeed.

Anyway, to address the loopback problem on my Debian system, I ran 'ifconfig -a' as root and noticed that 'lo' was missing from the active interfaces. Running 'ifconfig lo up' as root solved the problem.
So now I can bind to port 8118, and Tor and Privoxy play quite nicely.
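
To make the fix stick across reboots on a Debian box, the loopback stanza belongs in /etc/network/interfaces; a minimal sketch:

# /etc/network/interfaces
auto lo
iface lo inet loopback

With that in place, 'ifup lo' (or a reboot) brings the interface back up.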

Random Shots

I intentionally let a month elapse since my last entry, as I was determined to upgrade my publishing engine and make some radical changes to the style-sheets.
Though the publishing engine has been upgraded to MT 4.2, I'm still playing around with styles. It does appear that some things are broken, as I am not able to preview the styles from the MT 4.1 library. Below are the errors that appear when I attempt to pull in styles to review:

Error loading themes! -- {"error":null,"result"......
Additionally, I get some very nice core dumps from mt.cgi. I suspect these problems are caused by a broken Image::Magick (PerlMagick) module. Once I get all this sorted out, I'll share the sordid details. I can already appreciate the improved features and more sophisticated underpinnings of MT 4.x; it is orders of magnitude better than earlier versions of MT.
More on this later.

-rw------- 1 xxxxxx xxxxx 20426752 Oct 10 17:06 core.1000
-rw------- 1 xxxxxx xxxxx 25075712 Oct 10 20:04 core.14385
-rw------- 1 xxxxxx xxxxx 19509248 Oct 10 17:49 core.20653
-rw------- 1 xxxxxx xxxxx 25513984 Oct 10 14:51 core.25901
-rw------- 1 xxxxxx xxxxx 24383488 Oct 10 14:00 core.29407

Anyway, I'll be spending much of the day at the Ohio Linux Fest. I'm looking forward to hanging out with my fellow geeks and talking about FOSS. I expect to meet the TLLTS guys and perhaps even Dave Yates and Chess Griffin.


6 Minutes of Fame

As part of the call-in segment, I made a brief appearance on TLLTS last night. I probably rambled on much too long, but it was good to talk to the hosts. Actually, I'd briefly met Linc at OLF last year. As mentioned earlier, Dann helped me out with a couple of Asterisk issues I had earlier this year. Additionally, he listed my netcast among the TLLTS community shows, which certainly increased my audience, so it was good to thank him publicly. I also discovered that Pat is from Shaolin, err, I mean Staten Island. Outside of the Wu, I don't know very many people from Staten Island. His accent is refreshing for sure :)

It also appears that I made a few friends on the IRC: a couple of headz from MI, and of course Dave Yates of Lotta Linux Links fame.

Regarding "AG Speaks".. The show is still alive, I am in the midst of rebuilding my workstation and cleaning up my lab. I would rather do the post-processing work on a faster machine. Eventually, I will share the specs of the new box in a future post.

Thx to everyone for your patience and interest..

It had been a while since I'd had a client. Obviously, one of the perils of working a full-time job is that you tend to get a bit complacent, and your entrepreneurial zeal may fall into question. Nonetheless, I do enjoy unfurling the F/OSS banner. These days I don't have much time to administer M$ installs; they really take more time than they're worth. Most of my clients run XP and ask me to come make the trojans go away or help fix a corrupted Registry.

Well, in this particular case I was tasked with building an affordable box for a client. The constraints were numerous, but the most important was keeping the price below $300 US. I figured this would be fairly easy. I told the client that I would not install XP, as it would likely cost her at least $300 to maintain over the next 3yrs; besides, M$ will not be supporting it much longer.
Vista was totally out of the question, as the hardware requirements alone would easily eclipse 300 bucks.

First Contact with FreeBSD 7.0

| 1 Comment

As stated previously, I had been searching for alternative hosting solutions for my netcasts, more specifically for the numerous OGG files I have accumulated. Though these audio files are in a lossy format, they are of higher quality than the mp3 files I also provide with each show. A good friend has graciously permitted me to store these audio files on his community server, and I am forever grateful. Actually, I spent a fair amount of time researching the various hosting options. These days I'm less interested in simply dumping my content in the cloud and allowing a 3rd party to manage it. I looked at libsyn, but their service uses FTP, which IMHO is a very poor security model; I never liked the idea of passing my passwd in clear text over the Internet. There is no such thing as a throw-away passwd -- if that were the case, why even create one? I digress. Gotta have SSH tools at my fingertips. Eventually, I will begin serving up RSS feeds for my OGG content as well.
I'll likely use Listgarden to generate the appropriate XML tags, and then point them at Feedburner.

Though it may seem odd that I had never delved into FreeBSD prior to this excursion, I treat it as an opportunity to learn a bit more about its similarities to SunOS and Linux. Besides, I have been a Linux head for 12yrs, and all Unix-like systems share some similarities.

One of the first differences is the default shell. I was presented with tcsh, which I had not used since undergrad. I have been so comfortable with the Bourne Again Shell (bash) that I never give much thought to the shell environment. I don't have root on the box, so I was resigned to customizing tcsh. Taking a quick look at the ports tree under '/usr/ports/', there is a plethora of packages available for the platform. Who said there was a dearth of software for FreeBSD? I was pleasantly surprised.

I did add a bit of customization to my .tcshrc file. Since I am a CLI junkie, it is paramount that my shell prompt be useful. Though bash was installed, I figured I would resist the urge to change the shell; I can go back to my SunOS 4.1 experiences on the SPARC 1 (pizza box) workstations of yesteryear :)

set prompt = " %B[%@]%b %m[%/] >"
# start bold[time] end bold hostname [current working directory] >

The BSD startup scripts follow the same convention as Slackware's. In fact, if you go back far enough into the history of BSDi, you'll find that Pat Volkerding and the rest of the Slackware crew shared code and probably an office too :)

As an aside, I was looking at the ssh daemon init script on my fileserver, which is running Slackware 8.0 (2.4.10), and I noticed that Theo de Raadt was the author. An interesting bit of trivia, but it also revealed that I was several security patches behind. Obviously that is not good, and I resolved it immediately.

Obviously, there is still much to discover. I have only scratched the surface, and I have much to learn. As I come across interesting stuff, I'll likely share my experiences.

Cloning vs. Innovation

While perusing some of Dave Winer's early writing, I found this snippet. Though he was talking about his work on the RSS protocol, I still found it intriguing.

Read Dave's entire post here: "Checking in with Mr. Safe."

..Nice idea, but format and protocol design doesn't actually work that way no matter what some open development advocates say. They're mostly well-intentioned people, many of them users like Larry Lessig, who want software to work for them, without the usual tricks that software developers play to lock them in. I share that goal, totally. But people like Eric Raymond and Richard Stallman have told them that they have figured out how to design software without a designer, but unfortunately their technique only works for cloning ideas that have already been designed..

I suppose there is some truth to the above statement, though instead of cloning, I would say assimilation is the more appropriate word. I have heard that the F/OSS collective rivals the Borg. Resistance _is_ futile, believe that :)

All joking aside, to a great extent everyone does some form of derivative work. Pure innovation can be quite expensive, and not many people write software from scratch. Hence the Cupertino and Redmond giants, more often than not, swallow and assimilate rather than innovate.

I suppose what sets F/OSS apart from proprietary software models is the "release early and often" mantra. A product development analogy would be kaizen, or constant gradual improvement. The ability to execute this concept has given the Japanese automakers a distinct advantage over their more seasoned US counterparts. Not many people call Honda or Toyota copycats anymore.

Still more folks seem to subscribe to the premise that F/OSS does not innovate.
Here is another excerpt that I stumbled across.

In his InfoWorld "Open Sources" column, Savio Rodrigues responds to Jaron Lanier's views on OSS:
OSS Does Innovate

Some of you may have seen this article in Discover Magazine by Jaron Lanier. I find it difficult to argue when someone challenges "OSS obvious truths" because doing so takes some degree of professional courage. Jaron writes:

Twenty-five years later, that concern seems to have been justified. Open wisdom-of-crowds software movements have become influential, but they haven’t promoted the kind of radical creativity I love most in computer science. If anything, they’ve been hindrances. Some of the youngest, brightest minds have been trapped in a 1970s intellectual framework because they are hypnotized into accepting old software designs as if they were facts of nature. Linux is a superbly polished copy of an antique, shinier than the original, perhaps, but still defined by it. Before you write me that angry e-mail, please know I’m not anti–open source. I frequently argue for it in various specific projects. But a politically correct dogma holds that open source is automatically the best path to creativity and innovation, and that claim is not borne out by the facts. Why are so many of the more sophisticated examples of code in the online world—like the page-rank algorithms in the top search engines or like Adobe’s Flash—the results of proprietary development? Why did the adored iPhone come out of what many regard as the most closed, tyrannically managed software-development shop on Earth? An honest empiricist must conclude that while the open approach has been able to create lovely, polished copies, it hasn’t been so good at creating notable originals. Even though the open-source movement has a stinging countercultural rhetoric, it has in practice been a conservative force.
The fact that many "sophisticated examples of code in the online world" are of the commercial software kind, and not OSS, is simply because the vendor felt they could grow and be profitable without open sourcing the product. In some "innovative products" such as Joost or Skype, the open/closed nature of the underlying software is of little concern to the users. In other cases, such as RIM's enterprise software, users may prefer a more open product, like Funambol, but are willing to trade openness for a product that just works.

When a vendor has a truly innovative product, they do whatever they can to increase their return on investment. In most cases, this means that the source code isn't released. The conclusion is not that OSS projects don't innovate. Rather, that projects that are truly innovative are developed by vendors whose benefactors (VCs or Wall St.) want the biggest bang for their investment. Ipso facto, closed source is usually the path taken in these situations. This has nothing to do with the type of innovation that OSS can deliver....

Rodrigues hit the nail on the head; I couldn't have stated it better. Closed-source companies seek a competitive advantage for the benefit of shareholders, not necessarily for the benefit of their customers. Very different from the bazaar model that ESR describes in his Cathedral and the Bazaar essay.

More Trixbox Musings

Some weeks ago I listened to an episode of TLLTS and heard one of the hosts extolling the virtues of Asterisk. So I figured I'd visit his blog, and to my chagrin I could not find any helpful war stories. The hope was that I would be able to learn from his past experiences.
My previous struggle was putting together a working IVR and a functional conference room within Trixbox. Both items are fairly trivial; however, I struggled mightily.

Well, I fired off an amusing email to Dann and he got back to me. We set up a time to meet on IRC during the week. Once we connected, I explained the issue I was having with setting up the conference room. For whatever reason, I was under the impression that you needed a landline, or at least a Zaptel card, to set up a working conference.

He explained that although a Zaptel card is not required, you still need to load the dummy Zaptel module, which provides the timing source that conferences need. Hmm, why didn't I see that in any of the documentation? So I loaded the 'ztdummy' module and, lo and behold... I have a working conference, complete with music and everything. I no longer have to use freeconferencecall.com services; I can do all that I need on my own Asterisk box. Dann and I probably spent an hour or more on the phone testing out the IVR and the conference. He was even nice enough to revert his currently running Slackware-based Asterisk box back to an earlier Trixbox setup so that he could recall the appropriate settings; I suppose he had a hot-swappable disk or a virtual instance of Trixbox running on his server. TLLTS conducts all of its netcasts via Trixbox; in fact, they were an inspiration for my setup. I too plan to conduct netcasts via my Trixbox.
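
For anyone else stuck on this, the fix boiled down to a couple of commands; a sketch (the persistence step varies by distro, so treat that part as an assumption):

modprobe ztdummy         # load the dummy Zaptel timing module (no hardware required)
lsmod | grep ztdummy     # confirm it is loaded
# to load it at boot, add ztdummy to the distro's module list,
# e.g. /etc/modules on Debian-style systems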

The entire experience brought me back to my earlier days working with folks in the Linux community. Each one teach one. Hack till you drop, and then teach some other people to do the same. Mad fun.

Slug Travails

A couple of months ago I decided it was time to deploy a robust backup strategy for my home LAN. The problem is that I have exhausted the disk space on my fileserver, a box running Slackware 8.1 (2.4.10 kernel). I had thought about installing BackupPC on that box, but it required Perl 5.8.1, a modern glibc, rsync, par and an upgraded tar. On a very old and stable Slack box, upgrading Perl and glibc were show stoppers. I suppose a CPAN upgrade could have handled Perl, but upgrading glibc raised the hairs on my neck; it wasn't something I wanted to tackle, since borking the glibc libraries can really screw up a perfectly good box. Perhaps there are slackpkg or SlackBuild binaries for Perl, but I was not able to find them in the usual places.

Try as I might, the underpowered 300MHz Celeron was unable to compile Perl from source; in fact, the machine quickly overheated and shut down. Installing a more modern glibc essentially required upgrading to Slackware 9.0, which I did not believe was worth the effort. So enter the Linksys NSLU2: very compact, inexpensive, and with minimal impact on the environment. This little device is just what I needed. I grabbed a 500GB external drive and proceeded to install Debian Stable (Etch).

One can find these devices on eBay for $50 - $70 US -- just another reason to give it a try, not that I needed an excuse to geek out on some new hardware. To install Debian on the Slug, it was necessary to flash the firmware on the device. Gotta give credit to Chess Griffin, as his very helpful netcast inspired and motivated me to tackle this project.

Humorous Capitulation

| 1 Comment

I know that I am late, but I could not hold back on this one. Roughly four years ago, my buddy Keith became one of the most vociferous Apple fanboys. He attributed this odd behavior to his frustration with stuff simply not working on the Linux platform; mostly he was tired of reading, hacking and discovering. I was very sad for him, as he began to sound like an old man ;) I publicly called him Judas when he decided to leave the Linux community and become an MVP.

Actually, I have mellowed much since then. I am better now. What is it about the Apple platform that makes people overspend and subject themselves to so much abuse?
Well, I do understand the idea of lifestyle products. Steve Jobs has worked very hard at closing down the architecture, so that you must love his art or else. FWIW, his art is quite eye-catching, albeit costly. A friend recently mentioned something about the uniqueness of the power cord on Macintosh notebooks. Actually, I had never given it much thought, and the concept is quite trivial: the cord attaches to the notebook via a magnet, so if someone accidentally trips over it, it decouples without dumping your overpriced notebook on the floor.

Simple enough. Why don't you see this "innovation" on PC hardware? Well, it's called economies of scale. Apple deals with only a handful of suppliers, while Dell probably has hundreds. Apple is a hardware company, while Dell is a logistics company. Both do an excellent job of walking in lockstep with the vendors that place their components inside the case that ultimately becomes a desktop or notebook computer. Basically, Apple owns the entire computing experience, so it _ought_ to just work out of the box. In fact, the Mac is so closed that you even have to buy a set of golden wrenches just to work on the hardware. Help me understand: I just spent thousands of dollars on some hardware, and you tell me I can't work on it? Ridiculous.

What annoys me most about people who try running Linux for the first time is that they expect the same experience. They exclaim, "Linux isn't ready for the laptop or desktop!" I retort that Unix/Linux is user friendly; it is just particular about who its friends are...

Basically, Linux is nothing more than a powerful kernel with some excellent GNU tools. When you start talking about the plethora of distributions (Slackware, Debian, Gentoo, Fedora, etc.), then you get tons of software, and hence a workable computing experience. Linux is not a software or hardware company. There are no promises when you download or buy a shrink-wrapped copy of your chosen distro. The only constant is that you will likely learn something new, and most importantly you have the freedom to do as you wish. Some might argue that Linux does not have legal codecs; in some cases that is true, but I do not lose much sleep over it.

These problems will be sorted out eventually. Now, Mr. Elder makes some great points about Apple's history and the whole Power Computing (read: eating your young) blunder. I believe the broader issue is economies of scale. If Dell decided to add some of this unique and rather exotic hardware to its platform, the added costs would certainly be passed on to the consumer.

Personally, I look at a computer as a commodity that is critical for learning, not some luxury item that people should drool over or want to whip my ass and take from me. That said, if you're in the Apple camp, you will continue to get fleeced with overpriced hardware.
But you knew that already :)

Hardware gone bad

I spent a couple of days rebuilding my firewall. Normally, the evolution would have been complete in just a couple of hours, but you must realize that I have been running my Smoothie (smoothwall.org) since 2000. Hence, the case and power supply were both of the outdated AT form factor, and the obvious challenge was finding a replacement power supply. No such luck. In this area there simply aren't many privately owned computer parts stores; in the NY and NJ area they are quite plentiful. So I actually went to Worst Buy and grabbed a Linksys router, took it home, and decided to set it up. Lo and behold, you put it on the network and it is not recognized, because it defaults to the 192.168.1.x subnet whereas my LAN is on 192.168.0.x.

I decided that it was too much trouble to configure the silly router, so I returned it and simply bought an ATX case and power supply. The Smoothie is back up, and I hope to run it for another 7 yrs. Though Linksys routers run an embedded Linux kernel, I still prefer the Smoothie, as it is very configurable. Besides, you can always re-deploy a case and power supply for another server; you cannot very easily re-purpose a Linksys router without some hacking.

It is worth noting that when you decide to run Linux on any piece of hardware, it is likely that the software will outlast the hardware. In the M$ cycle plan, the contrary is true: you will likely be forced to run out and purchase new hardware on a three-year cycle. One last nicety that Linux affords you: I have found that it is quite easy to swap hardware between machines without reloading drivers or hitting any other weird catastrophic failures. For instance, if you discover a failed motherboard on a machine, you simply remove the hard drive and install it into another box. No need to worry about installing drivers or hunting for silly software license keys.

This feat is virtually impossible on a Windows box, as drivers and all sorts of userspace applications write directly to the registry, which basically prevents flexibility and portability in a pinch.

Trixbox 2.2 - Revisited

Figured it was time to provide an update on my Asterisk excursion. As mentioned previously, I decided to build my own PBX so that I could gain more control over conference calls and a standard voicemail box. Previously, I paid $15 US per month to lease a voicemail box with a toll-free number; the voicemail was necessary to manage call volume for my real estate work. Asterisk now provides a very powerful means of maintaining not only voicemail, but IVR and a large number of other telephony niceties.

Sure, you could accomplish some of these tasks by paying Vonage, Comcast or some other ITSP, but then you would lose customization options. Moreover, Vonage is in serious trouble. I elected to use BroadVoice as my ITSP, as they have a BYOD (bring your own device) option that suits Asterisk users quite well. There are other options too, and as the telephony space matures there will likely be a plethora of alternatives.

Trixbox is a fairly nascent telephony/CRM distribution built around Asterisk (the open source PBX project chiefly backed by Digium). I still have much to learn and luckily there is a plethora of information available through several wikis and IRC. A good book to grab is O'Reilly's Asterisk: The Future of Telephony. Great reference text to supplement what you will find online. I will likely dedicate a substantial portion of a forthcoming netcast to my Trixbox excursion. I was somewhat surprised that there were not many people in the trixbox channel.

As mentioned in a previous post, Trixbox provides a number of different tools to help build a formidable PBX. I have spent most of my time with the freepbx module.

In telephony speak, I have two SIP trunks which are dedicated to both of my SIP softphones (Ekiga).
Both trunks have dial plans for both incoming and outgoing calls. Eventually, I'll add another trunk for my wireless IP phone. If I had a landline phone, I would have to add a Zaptel trunk as well. Actually, there is no limit to the number of trunks that you can create. For instance, if you were making international calls, you could configure an outbound dial plan for those country codes too.

A word of caution: since we live in a NAT'd world, you will need to punch holes in your firewall. You must do this because most firewalls do not pass SIP or STUN packets natively; typically opening port 5060 will solve that problem. I have read that some people have placed their Asterisk boxes in the DMZ portion of their network. Unless you know how to disable all non-essential services, you could be creating unwanted problems. Probably safer to stay behind the firewall. You'll also need to allow RTP traffic on the range defined in rtp.conf (10000 - 20000 by default), especially if you wish to be able to connect to your trixbox while you're on the road. In my case, I plan to use a wireless IP phone to connect to the box and place calls to whomever. The wireless IP phone basically hunts for open WiFi networks and grabs an IP address to negotiate a connection. Pretty slick.. I will also get around to registering with FWD, as I think it would be great to make calls directly to other PBX machines without using a third party (PSTN) to bridge the calls.
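For reference, here is a minimal sketch of the firewall holes on an iptables-based gateway. The RTP range assumes the stock rtp.conf values, and 192.168.0.50 is just a stand-in for the address of your Asterisk box, so adjust both to your own setup:

# allow SIP signaling and the RTP media range through the firewall
iptables -A INPUT -p udp --dport 5060 -j ACCEPT
iptables -A INPUT -p udp --dport 10000:20000 -j ACCEPT
# on a NAT gateway, forward the SIP traffic on to the Asterisk box instead
iptables -t nat -A PREROUTING -p udp --dport 5060 -j DNAT --to-destination 192.168.0.50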

What I like most about the Asterisk Project is that it demystifies telephony and removes the black box from the technology. Heh, is that not the intent of FLOSS anyway? As I discover more useful features and gain more knowledge, I will report my findings.

Now I have only three more projects remaining: finish my thesis, configure/install MythTV, and install Debian on my NSLU2.. More fun ahead.

Ohio Linux Fest 2007

A couple of weeks ago, I attended the Ohio Linux Fest in Columbus, OH. Unfortunately, I spent just one day at the conference. I drove up with one of my buddies, and we got hoodwinked by some incorrect Mapquest driving instructions. I really need to invest in a real GPS system. I completely forgot to bring my DeLorme Earthmate GPS, which I have not yet road tested with gpsd on my Debian Etch notebook. I suppose that in a pinch, I could have used mobile Google Maps, but I am not particularly happy with the inevitable timeouts that seem to plague the EDGE network. So I toughed it out with Mapquest, and the directions told me to turn the wrong way. The detour cost me about 30min. The ride was about 3hrs.

Upon my arrival I eagerly immersed myself in the goodness that is Linux ;)
The first person I ran into was Jon Maddog Hall, who briefly chatted with me as he was running to his next session.

I attended the following sessions:

  • Puppet admin tool
  • GNOME conduit
  • Birds of a Feather - Asterisk
  • Computing off the Grid - Jon Maddog Hall
  • MythTV - Jeff Price (Novell)
  • Software Freedom - Bradley Kuhn

I was also fortunate to run into a Django developer who was gracious enough to help me hack up a script to solve a nagging problem. I have attended five Linux conferences and I generally prefer shows which encourage discussion among the participants. OLF reminds me of the now defunct Atlanta Linux Enthusiast (ALE) show, which actually was my first Linux conference. Although there were vendors, the main focus was the participants, and there was no cost for attendance. The conference promoters allowed people to pay an optional fee in exchange for some special conference goodies (ie official conference shirt, lunch, nametag).

I especially liked Maddog's talk, MythTV, and Kuhn's keynote. In fact, I was given the mic to ask a question during Kuhn's keynote. Hopefully, I can get the audio and share it with you..

Skype violates GPL

Interesting article which seemed to slip in under the radar..

Skype Violates GPL

I wonder how many other instances of this infraction have gone unpublished or undiscussed? The forthcoming final draft of GPLv3 should begin to shed light on these issues. I would imagine that the FUD credits that the Redmond wooly mammoth has been doling out will come to a screeching halt.

Licensing in the Web 2.0 Era

Eben Moglen, Director of the Software Freedom Law Center, attempts to educate Tim O'Reilly on the importance of Freedom.
From the O'Reilly Radar Executive Briefing, Tuesday, July 24 at the O'Reilly Media Open Source Convention.

Benevolent Dictators in FOSS

This entry was inspired by my buddy Michael Kimsal; actually, I had been thinking a bit about how personalities inspire or retard the advancement of various open source projects. For instance, reiserfs was my first experience with journaled filesystems; in fact it has always been my filesystem of choice. It plays well with NFS and it has always been reliable. Nonetheless, the future of reiserfs is in jeopardy due to the ongoing legal problems of its lead developer. It also seems that the lead developer ruffled the feathers of many of the kernel developers, who would be principally responsible for merging the reiserfs code into the Linux kernel. I do hope the community and Namesys (the commercial entity sponsoring the reiserfs project) can come to a very agreeable compromise. Similarly, I have watched Jörg Schilling and Nemosoft; both of these developers produced work which allowed me to be productive. Cdrecord and the pwc driver (Philips ToUcam), respectively, were staples of my Linux desktop. Unfortunately, both left the community under dubious circumstances. For whatever reason, these guys had a tough time dealing with Linux kernel developers. Luckily their code still remains. Hence the goodness of FOSS ;)

I would have to agree, the personalities of lead developers typically can make or break a project. If the person is amicable, or at least accessible, it really goes a long way toward adoption by end-users and also by developers close to the kernel. Personally, I have used Slackware for a number of years because I was able to make a connection with the community and a few of the lead developers. The experience has been immensely gratifying. No, I've never sat down and had dinner with Pat V, but he is accessible and even responds to email.

When I was first introduced to Debian in 2000, I was fairly new to that community and quickly learned that you really must RTFM :) Once I began to ask better questions, I realized the mailing lists weren't that bad after all. Some years later, I discovered that some of the Debian developers maintained blogs and hung out on IRC. So I began to leverage that communication channel.

In a nutshell, placing the face with the package or the distribution will help engage people to the project. If the benevolent dictator has any charisma or at least is accessible, it then becomes easier to form a virtual bond or network with that individual. This theme has been pervasive inside the FOSS community. The advent of social media has made this even more possible than during the very early years of simple pseudo public mailing lists.

I understand that some Redmond developers are now blogging too. Somehow, I don't believe that the average Windows user is very interested in having direct communication with the M$ developers. Methinks this is because *their* community was not built upon sharing and transparency. FWIW, this probably applies to Apple too. Those communities are largely left to fend for themselves. What usually happens is that an ecosystem of third-party developers is created because of the closed nature of Redmond and Cupertino. Buggy software or an unscratched *itch* is solved via shareware. If the end-users of those communities are unwilling to pay, they simply suffer. That is the law of the land, or at least that is how I see it :)

Foray into Backports

I was introduced to the Debian backports some months ago, but did not have time to discuss my experience. Essentially, backports take newer features and software drivers and rebuild them for a stable release, which otherwise would not support more recent or exotic hardware.
Why is this important? Well, if you are running a production environment, it is unlikely that you would risk running software from the unstable branch of a distribution. So, developers provide a means to port some of the goodness of these later releases back to the stable distributions.
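As a rough sketch of how this works in practice on Sarge (the repository line is the one I recall from backports.org, so double-check it there, and 'some-package' is just a placeholder):

# /etc/apt/sources.list -- add the backports repository
deb http://www.backports.org/debian sarge-backports main
# then update and pull a single package from backports
apt-get update
apt-get -t sarge-backports install some-package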

In my case, I was doing a bare metal install of Debian Sarge, but I had a need for SATA support and a few other specialties (Dell PowerEdge 830). So I stumbled upon this backport. It was truly helpful. The onboard NIC and the SATA drives were detected with no problems at all.

I am clear that Debian is not the only distribution for which backports are available. I have heard that the BSDs have practiced this philosophy for quite awhile.

Gnucash 2.1.3 - Slackware

| 2 Comments

Recently finished installing Gnucash on my aging workstation. It does seem that I'll need to update the motherboard and processor in my box. I'm running a very old Asus A7M266-D board with a modded Athlon MP. I long for a dual-core 64bit workstation. My shiny new Acer Aspire 5100 is not dual-core, though it is 64bit.

Well, for those of you who are not familiar, GnuCash is probably one of the older, totally free (as in free speech) personal and business finance software packages. It has now been ported over to the win32 platform. It is fully compatible with the OFX and HBCI formats. It will also read Quicken (.qif) and M$ Money files.

Traditionally, the biggest hurdle to installing GnuCash (and most Gnome pkgs) had been the huge number of dependencies. In the past, I avoided the problem by using FreeRock Gnome, Dropline Gnome and even Gware. All of the aforementioned packages provide a fairly standard means of installing Gnome packages on a Slackware machine.

This time I used Dropline Gnome 2.18.1 beta, so that I could get all of the required Gnome libs. This wiki was helpful. I grabbed the GnuCash 2.1.3 tarball, unpacked it and installed the pkg.
Please be advised that the odd series (ie 2.1.x) is considered the unstable branch.
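For anyone who has not done a source install before, this is roughly the dance I follow for a tarball build (the --prefix is just my habit; configure will complain about any Gnome libs that are still missing):

# unpack, configure, build and install
tar xzf gnucash-2.1.3.tar.gz
cd gnucash-2.1.3
./configure --prefix=/usr/local
make
su -c 'make install'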

Below is a summary of the new features as listed from GnuCash project page.

GnuCash 2.1.3 released

The GnuCash development team proudly announces GnuCash 2.1.3 aka "at last!", the fourth of several unstable 2.1.x releases of the GnuCash Open Source Accounting Software which will eventually lead to the stable version 2.2.0. With this new release series, GnuCash is available on Microsoft Windows for the first time, and it also runs on GNU/Linux, *BSD, Solaris and Mac OSX. This release is intended for developers and testers who want to help tracking down all those bugs that are still in there.

DATA FILE NOTICE If you are using Scheduled Transactions, the data file saved by GnuCash 2.1.2 and higher is NOT backward-compatible with GnuCash 2.0 anymore. Please make a safe backup of your 2.0 data before upgrading to 2.1.2.

I have used GnuCash for about four years. The package has vastly improved. They have been using Gnome 2.x libs since GnuCash 1.8.x. Perhaps the only feature that I wish could be added is syncing with a PDA (ie Palm OS), so that I could easily transfer the data from my Palm based P-Cash to my desktop (street spending adds up).

It appears that this feature has been on the project roadmap for awhile. Maybe I need to write one? Try it out, it's a good software package.

Trixbox 2.2

There are some nagging projects that I have neglected to complete, due in part to coursework and other distractions. One such excursion is installing a home PBX. In truth, I have not owned a landline for nearly four years. When I purchased my home, I made a conscious effort to go strictly wireless, that is, to use my cell phone as the primary means of contact. In my mind, it makes no sense to pay for services twice. Simply wasteful. Besides, a landline phone would only serve as a magnet for unsolicited telemarketing.

Another impetus for installing Asterisk was the idea of conducting more robust netcasts. Currently when show guests opt to use a telephone, we end up using some free conference calling software. The end result is that the audio file is left on the foreign server. Though I can easily download the file to my LAN, I would prefer to have more control over this process. Another benefit is that I would like to set up IVR and voicemail for my home office, and I have no desire to use Vonage (soon to be defunct).

I understand that Mark Spencer, founder of the Asterisk project, has gone to great lengths to simplify the installation. There have been a couple of live-cd installations floating around (ie AsteriskNOW and Trixbox, previously Asterisk@Home).

For whatever reason, I was not able to install AsteriskNOW. It simply would not write to my hard disk. It seemed to bomb when it could not find a RAID array.

Anyway, I grabbed Trixbox and quickly discovered that it was based upon CentOS 4.4.
While I typically avoid RPM based distros (actually I have been using SmoothWall firewall/gateway for years without issue and it's essentially a minimalist Red Hat distro), I decided to proceed.

The installation was fairly painless, despite the repeated reboots after grabbing the required files from the yum repositories. Not sure why the automatic reboots were necessary. It seems that the installation requires a large portion of the base CentOS packages. Stuff like bluetooth, kerberos, libselinux, cups, etc. I was a bit surprised about the need for all these packages for a PBX install. Nonetheless, the install completed without any problems after the base Asterisk install was finished.

The GUI contains a collection of options. An explanation of all of these is beyond the scope here, so I'll just mention that I chose FreePBX.

There are a number of tutorials which will step you through a Trixbox install.
I used the trixbox without tears guide and a Linux Journal article. In truth, there is still a fair amount of configuration that I must complete. I do want a high level of customization. As with most projects, this will be a work in progress.

Anatomy of Hack (Revisited)

It appears that a box that I administer for a friend was compromised. Seems that some script kiddies launched a dictionary attack against the ssh daemon. Yep, I was careless and stupid. Luckily, these crackers only wanted to run an IRC relay. After using a brute force method of gaining root access, they simply installed the script in /root. It seemed odd that running 'ifconfig -a' would yield eth0:1 ... eth0:295. Not good.

I told my friend to shut down the box immediately and pull the hard drive. We later reinstalled the OS (it was previously running unstable/testing sarge). Once Debian Etch was installed, I immediately modified /etc/ssh/sshd_config to _not_ allow root login and to listen on a port other than 22. I also disabled password authentication, so now only approved keys can be used to gain access. Problem solved.
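For anyone wanting to harden their own box, these are roughly the sshd_config lines involved. The port number below is just an example, so pick your own, and remember to restart sshd afterwards:

# /etc/ssh/sshd_config (relevant lines)
Port 2222
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes

# apply the changes on Debian
/etc/init.d/ssh restart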

Strip mining at the FOSS quarry (revisited)

Recently listened to an interesting discussion about MacFUSE on ITConversations. I have been playing around with sshfs/FUSE for a short period of time. While FUSE is nothing new, it is quite compelling, in that it provides an easier means of implementing a filesystem in user space. The Linux kernel has hooks for all types of interesting userspace filesystems. I have seen implementations of cramfs designed for image graphics and word processors.

What is perhaps most interesting is that the author, Amit Singh, has created some slick implementations for the OS X desktop. Moreover, he makes mention of a book that was written in part to dispel some myths about Apple's most recent operating system. I must admit that I have always understood Mac OS X to be a BSD based OS. Singh notes that it is actually composed of two layers, Mach (kernel) and BSD (user space).

I remember reading in the "Just For Fun" book that Steve Jobs suggested to Linus Torvalds that, since there were only two players, M$ and Apple, he should get into bed with Apple and try to get open source people interested in Mac OS X. Something to that effect. Linus dismissed Darwin as a piece of crap because it was based on Mach, a microkernel architecture developed at Carnegie Mellon.

Interestingly, Darwin development has withered on the vine. Apple has pretty much abandoned it and the community never really took it seriously. In fact, some people tried to fork the project, and that too has died. Judging from the activity at MacPorts, Apple really holds on tightly to its crown jewels or Jobs's incredible art.. iPhone ring a bell ;)

Perhaps Torvalds had a premonition?

Singh made a point of dispelling the myth that Mach is a microkernel. I would love to see the flame wars that he has probably endured over the years.

He also does not consider OS X to be similar to a proprietary UNIX system. I would tend to agree. Nonetheless, I would agree that OS X is certainly more Unix-like than M$ Windows ;)
In fact, it seems that both Windows and Mac OS X have borrowed a fair amount of Unix _glue_ in recent years. It is apparent that OS X owes a debt to Unix.

For instance, the Windows TCP/IP stack was radically changed to resemble the BSD model for W2K and WinXP. We know that the earlier TCP/IP stacks in Win 3.11 and Win98 were horrible.

I would imagine that the network layer in earlier Macintosh machines was also radically different from the current Mac OS X product. Why is this possible? In one word.. FOSS (Free and Open Source Software). At first glance, the ecosystem works quite well, does it not? It just seems that Apple and M$ take far more than they give back to the community. My observation could be totally wrong, but I venture to guess that I'm pretty close. I suppose that the sharing is curtailed greatly, due to the existence of the BSD License.

Just a guess.. It would appear that the TCP/IP stack integration (M$) and Darwin (Mac OS X) were made possible by the use of the BSD License. It would be great if everyone shared equally.
I suppose that would be wishful thinking..


Slackbuilds Amour

| 2 Comments

High praises to all the people working on the Slackbuilds project. Nothing short of brilliance. I have been able to find even the most obscure packages (kchmviewer), which compiled easily on my slack box. Even the recently unsupported, but very popular Pidgin application can be found here. Thx to rworkman et al.

sshfs / FUSE and gmailfs

Well, I recently set up FUSE and sshfs. If you're unfamiliar, FUSE is a framework for building filesystems in userspace, and sshfs uses it to provide some interesting flexibility through the goodness of OpenSSH.

Of course, you'll need to install FUSE and sshfs on your system to utilize these powerful tools.

I have always used SSH to access remote shell accounts and also to tunnel VNC traffic. Once you've set up secure key pairs, you'll not need to provide a username and password.
Hence, it then becomes trivial to run a script to automate repetitive tasks.
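If you've never generated keys before, the setup goes roughly like this (ssh-copy-id ships with recent OpenSSH client packages; if you don't have it, the manual append shown afterwards does the same thing):

# generate a key pair, then push the public key to the remote host
ssh-keygen -t rsa
ssh-copy-id username@remotehost
# or, without ssh-copy-id:
cat ~/.ssh/id_rsa.pub | ssh username@remotehost 'cat >> ~/.ssh/authorized_keys'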

sshfs allows you to securely mount a remote filesystem on your local machine and then read/write to it quite effortlessly.
You simply set up a mount point for the remote filesystem, and run:

sshfs username@remotehost:/home/username /local/mountpoint
If you need to run special options 'sshfs -h' will provide you with the appropriate syntax.
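When you're finished, unmounting is just as simple (and the '-o reconnect' option is worth knowing about for flaky links):

fusermount -u /local/mountpoint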

Though I know I'm quite late, I finally decided to do something with all of the unused space on my gmail account. It seems that people have written python scripts to interact with gmail over the wire. Now that you've set up FUSE, you can make use of all of that extra filespace via gmailfs. Not sure if you can serve up data on gmailfs, as it could be frowned upon. Well, I suppose you won't know until you try :)

Importance of log rotation and maintenance

Yesterday, I was faced with a perplexing error, "Page not viewable, check proxy refusing.. " or something to that effect. Well, I happen to use Privoxy, the client side web proxy. I also use squid, a server side web proxy, on my smoothwall firewall/gateway.
Privoxy is great because it zaps ads of all flavors (ie flash, image, js, etc). Privoxy was once managed by the same organization as the Junkbuster web proxy. The squid proxy helps me cache images, in an effort to improve my browsing experience.

Once I began to get these error msgs, I figured that there was a problem with the client-side software. I simply upgraded to the latest stable release of Privoxy, 3.0.6. I had been running v3.0.3 for at least 3yrs, and was very pleased (until I was unable to properly load sites).

However, this did not mitigate the problem. For whatever reason, I thought perhaps it was a connectivity issue. So I cycled the cable modem. No improvement. I could ping out to the internet without issue. So, I immediately began to suspect my smoothie.

After logging into the box, I noticed this:

[root@goon root]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/harddisk4 942M 552M 342M 62% /
/dev/harddisk1 5.7M 4.2M 1.3M 76% /boot
/dev/harddisk3 234M 235M 0 100% /var/log

Hmmm. No wonder the squid daemon shut down.
Heh, the /var/log partition was consumed by the logs. A deeper look at /var/log/squid revealed:

[root@goon squid]# du -h -s * | more
29M access.log
26M access.log.1
55M access.log.3
23M access.log.5

I began to wonder why the logs weren't being purged appropriately. I'm _sure_ that I had set up a cronjob to rotate and touch the files as appropriate.
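For reference, on a box with logrotate installed, a squid rotation stanza looks roughly like the sketch below. The binary path is a guess based on the /usr/local/squid prefix on my smoothie, so adjust it to wherever your squid actually lives:

# /etc/logrotate.d/squid -- rotate weekly, keep four generations
/var/log/squid/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    postrotate
        /usr/local/squid/sbin/squid -k rotate
    endscript
}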

As stated earlier, the squid proxy server simply caches all the images of the websites that I frequently visit. Simple problem and an even easier fix.
After reviewing the access logs to make sure nothing was afoul, I proceeded to blast away the files to reclaim space on the /var/log partition.
As root, I executed '/usr/local/bin/restartsquid' and checked to see if the squid process restarted.

[root@goon log]# ps aux | grep squid
root 10376 0.0 0.6 3604 200 ? S Apr23 0:00 /usr/local/squid/
squid 10379 0.1 61.8 27028 18820 ? S Apr23 7:16 (squid) -D
squid 10381 0.0 0.1 1284 56 ? S Apr23 0:00 (unlinkd)
squid 10382 0.0 0.5 1976 164 ? S Apr23 0:27 diskd 10628096 10

All looks well. I also took advantage of this opportunity to run a squid log analyzer via the smoothie web interface. My firewall box simply runs and stays out of the way. I'd forgotten to grab the uptime info before I rebooted. It's pretty easy to forget about it, that is until I am unable to resolve pages as I expect ;)

Penguicon 5.0

Attended my 3rd Penguicon convention. The first night of the convention was lackluster at best. I checked out two talks: Introduction to Python, and Moodle, a Free Software Web Portal For On-Line Education.

The first presenter was unable to get the projector working, so the audience was asked to huddle around her notebook computer. Luckily I underwent Lasik surgery and my vision is at least 10-20. Needless to say, it still was difficult to appreciate the Python talk without really viewing the code. The presenter promised to post her talk on her blog. I'll have to take a peek later.

The Moodle talk was decent, and provided by the LinuxBox folks. Ironically they used the same room as the Python talk and were able to get the projector working. The problem these guys experienced was a weak Wi-Fi signal. No problem, I later got a copy of the Moodle talk from one of the presenters.

I do not fault the presenters as much as the hotel facilities. It seems that the hotel gofers were either not on duty or overwhelmed by other problems at the time of these talks. Perhaps the hotel just assumed that a group of geeks could simply fix their own problems.

Nonetheless, I'll be taking another trip to the Troy Hilton to take in a few more talks. I especially want to take in some of the Nanotech and TrixBox conversation.

GParted 0.3.4 - Reviewed

| 2 Comments

Ok, after a few failed attempts at completing this entry... Here it goes.
A bit of background is required to appreciate the problem which created the need to use GParted.
I am attempting to squeeze every bit of life from my aging workstation (modded Dual Athlon MP 2.4GHz).
Eventually, I will pull the trigger on a 64-bit AMD Athlon 4400+ 2.3GHz Brisbane core. Until then, I'm limping along with the older system.

When I initially set up my system, I allotted a mere 437MB to system swap (/dev/hda2) and roughly 7MB to /boot (/dev/hda1). The problem with this setup is that Linux kernel images have really become much larger over the past few years. The reason for the large growth is that most distros (Slackware included) insist that a vanilla kernel contain modules (read: drivers) for every piece of hardware under the sun. So, when you boot your system, hardware is recognized immediately, regardless of whether it is an old AHA-2940 SCSI adapter or a very recent SATA drive.

In general, I typically experiment with at least two kernel images, so a 7MB /boot partition seems impractical. In fact, one image consumed 99% of the /boot partition. So, what must be done?

Puppy 2.12

| No TrackBacks

First impressions of the Puppy Linux distribution.. Recently had the opportunity to install this minimalist distro on an IBM Thinkpad 260 (Pentium MMX w/32MB RAM). Certainly a very low power machine. The system had been running Win98, but had no working USB stack. The machine was only used to 'telnet' to a set of Cisco routers via the RS232 port. Yeah, I said telnet. Imagine that.. Luckily these routers never see the public internet. I have already warned the owner of the dangers of using telnet. He was using Hyperterminal to telnet to the routers. Yuck.

I offered to extend the usefulness of the machine by installing Linux on the device. I had heard that Puppy and DSL (Damn Small Linux) were superb in this sort of scenario. To be clear, I'd never installed Puppy and was not very familiar with its nuances.

So, I spent a fair amount of time reading and perusing the news groups and project site. When I saw the Live-CD option, I immediately wanted to test it out. It quickly became clear that I could not run the distro out of ramdisk.. The wiki stated a minimum of 128MB RAM was required; gosh, were they correct. The system ran very poorly. I couldn't bear to watch it. I was then told that I could wipe out the FAT32 partition and run Puppy natively, so I decided to install it to the hard disk.

Before I got to this stage, I spent a considerable amount of time trying to figure out how I would get the kernel onto this machine. It didn't have a working network card; the PCMCIA card was an Intel Pro/100 which had no drivers. I figured that I could find the Windows drivers on Intel's site, but the problem would be transferring the 5MB install program to the machine. What about USB you ask?? Well, yes the machine did have one USB slot, but as I stated earlier, I was working with Win98 which 'never' had a working USB stack.

Fifteen years ago this would not have been an issue, as the Linux kernel would fit onto one floppy. In fact, my first Slackware install was done with a series of floppy disks. I digress. Alas, there was a CD drive on this Thinkpad, but it took me much too long to find it.

The hard drive was a paltry 5GB. I immediately allocated ~350MB for swap, another 30MB for /boot (/dev/hda1) and the remainder for / (/dev/hda2). I ran the install and eureka, I noticed the difference immediately. I was actually able to get a desktop up, run the setup wizard, and install the 'eepro100' module for the network card. I then installed 'vmlinuz' (the kernel) to the hard disk.

A few unfinished items.. I never got a chance to set up the boot loader, but I did create a boot floppy. I did not set up a non-privileged user account. I also have to explain how SSH works, so that he can use SSH instead of telnet. AFAIK, telnet has been deprecated and disabled by default on most distributions for many years.

Nonetheless, he now has a working machine albeit too slow for my taste. He'll probably need to add more RAM. I couldn't imagine running X on a system with 32MB RAM. He seemed pleased to learn something new..

More updates later.

'Maildir' annoyances

| No TrackBacks

Aargh, I've spent entirely too much time cleaning up after some faceless and nameless admins. For whatever reason, there was a decision to install courier-imap, which by default creates 'maildir' style mailbox folders. The previous imap server ran fine for me. I actually liked the 'mbox' style folders. It was very easy to locate the explicit path to your inbox. Once you understand the location of the inbox, writing procmail recipes is a breeze.

Now the mbox format has been replaced in favor of the allegedly improved 'maildir' (there is supposed to be some speed improvement for very large mailboxes), which uses cur/, new/, and tmp/ subdirectories. All of the mail is spread all over the place. What a mess.

It seems that the average user on the virtual domain which I lease does not use procmail, so the admins don't really seem to care that they screw everything up each time there is an attempt to upgrade the imap servers.

After spending a couple hrs reading, I modified my .procmailrc and added '.' and '/' to each of my folders. It seems that maildir style boxes add a '.' to the beginning and a '/' to the end of your folder names, so your procmail script must also contain these attributes. The DEFAULT variable must also look something like this - DEFAULT=$HOME/Maildir/ The trailing '/' is significant, as it explicitly tells procmail that you are using 'maildir' style mailboxes and not the ubiquitous 'mbox' format.
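Putting it all together, a maildir-aware .procmailrc looks roughly like this. The mailing list and folder names below are just stand-ins for my own recipes:

# ~/.procmailrc -- maildir-style delivery; note the leading dot and trailing slash
MAILDIR=$HOME/Maildir/
DEFAULT=$HOME/Maildir/

# file list mail into a courier-style subfolder
:0
* ^List-Id:.*debian-user
.lists.debian-user/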

Yes, I sort everything which touches my inbox, so it really isn't cool for procmail not to work as desired. More on this debacle later.

Importance of Loopback Device

| No TrackBacks

As is typical, I found myself helping out a friend with their Debian box. For whatever reason (which would become apparent later) he was not able to access the server from his network. It was clear that Samba was running and that once the Windows clients were authenticated to the network using the same username and passwd, accessing the Linux server resources would be trivial.

After perusing /var/log/samba, I discovered the following error:
open_oplock_ipc: Failed to get local UDP socket for address

Hmmm. The failure didn't immediately jump out, so off to trusty google.
It seems that there is a use for the loopback device after all. Actually, I had not given it much thought. I always thought of the loopback device as a 'virtual' device that is simply required to handle localhost. It seems that samba cannot function without the network loopback device enabled.

A simple 'ifconfig -a' revealed that loopback wasn't enabled. Easy enough to fix.
Running as root '/sbin/ifconfig lo 127.0.0.1 up' resolved that matter.
However, the next step was to make sure that the device remained active.

So I next modified '/etc/init.d/networking' and added 'ifup lo eth0', so that both the loopback and the first (and only) ethernet device are brought up at boot.
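For what it's worth, the stock Debian way to make this persistent is /etc/network/interfaces rather than the init script; a minimal sketch (the eth0 stanza here assumes DHCP, so swap in a static one if that fits your LAN better):

# /etc/network/interfaces
auto lo eth0
iface lo inet loopback
iface eth0 inet dhcp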

Next, I made a few modifications to /etc/smb.conf and made sure that I synced the passwords between Windows and Linux.

Once I completed the modifications, I ran 'testparm' and 'smbclient -L hostname' to dump the current samba configuration and active shares to stdout.

The output was as expected, hence problem resolved.

Bleeding continues..

| No TrackBacks

It seems that the winds of change continue to sweep through Redmond.
Yeah, I know this is old news, but this announcement came just a short time before Gates stepped down. Could there eventually be peace between the pundits of the Cathedral and Bazaar software models? As the article suggests, most of the stuff contained in the "Get the facts" campaign was FUD and sought to smear Open Source software. It is absolutely no surprise that there was a great deal of backlash levied against the orchestrator of this effort.

Unfortunately, industry analysts did not say very much about the truth behind these so-called facts.
James McGovern where ya at ;)

Anti-Linux leader leaves Microsoft | InfoWorld| By Elizabeth Montalbano

Sourceforge is a testament to the multitude of projects that either fork or eventually get ethered without serving much useful purpose.. Nonetheless I would argue that there are a number of SF hosted projects that are thriving and quite useful.

Disclaimer: Both Kimsal and Elder are my buddies, so I bring forth no axe or wedge ;)
Hmm, I wonder what these guys think about the Summer of Code? I'm clear that companies like Google would encourage the various project owners (principals) of Apache, Moodle, OpenSolaris, OLPC, Mozilla, and others to build to definite specifications. I couldn't imagine that it would be otherwise.

I suppose it is difficult to fathom the sheer amount of work that these projects are delivering to the community. Even those very close to the activity can easily become befuddled.

Michael Kimsal’s weblog » Critical thinking on state of open source software

Vista - So what?

| No TrackBacks

Interestingly, it still seems that people are expecting huge returns from Vista.. Sure, XP is more stable than the abomination that was ME or 9x, but it is still plagued by trojans, adware and virii. It is very likely that by the time Longhorn, err I mean Vista, is launched 1st Qtr '07, there will be yet another service pack for XP. Weren't there six service packs for NT?
Perhaps the funniest aspect of this discussion is the thought that Open Source and Web2.0 applications must take advantage of the tardiness of the upcoming M$ release. I think not.
Here is why.. People seem to think marketing campaigns are indicative of tremendous activity and intelligent design. Absolutely a farce. The Redmond wooly mammoth and Apple have always used slick campaigns to paint a picture, and the results have not always been favorable.

In Search of Dtrace

| No TrackBacks

I was fairly convinced that a defunct or zombie netstat process had been creating subtle disk activity. Because I use a couple of P2P applications, I began to get worried about being 0wned. When the problem first occurred, I ran 'ethereal' and 'tcpdump' to make sure that no data was being passed across the wire. Nothing popped out at me. I then ran 'vmstat' and 'lsof' to get an idea of whether any rogue process might be writing out to files. Again nothing significant. At this point, I'm fairly confident that no foul play is afoot. Nonetheless, I'm still clueless as to what is causing my issue.

Ahh. If I were running dtrace on my Linux box, I'm certain that I could discover the problem. I do hope that someone ports this very slick app to Linux. All Sun Solaris users have the benefit of using this powerful utility. Hmm. I wonder if Solaris x86 comes with this tool ?

It appears that the pkg is quite extensible and capable of doing exactly what I need.

Some immediate needs:
All I want is to group the processes that are writing to /dev/hdaX and get a general idea of how long those processes have been active. I would also like to know the memory usage too.

Running 'vmstat 2' gives me an idea of what the running processes are doing. Nonetheless, the report isn't nearly as clean as dtrace.

Anyone have any ideas??

Penguicon 2006

| No TrackBacks

Alas, I've returned to the annual Penguicon, the local reunion of geeks and Sci-Fi fanatics. I spoke about this conference last year. While it seems to get smaller each year, the usual suspects keep coming back (ie ESR). Nonetheless, I attended some pretty decent workshops.

Got a chance to check out a security conversation presented by Tatsuya Murase, and I was definitely surprised by the content. I was expecting a more nuts and bolts, how-to tools discussion. It was more of a strategy and policy discussion. Certainly was not a waste of time. I also caught a portion of the SSL discussion (Bill Childers), as I had to take a break from the sweat box that was called a conference room. The Holiday Inn probably doesn't make much on the attendees for this conference, so the accommodations are pretty sparse. Lastly, I got some good ideas for upcoming improvements to my home LAN backup and archiving strategy. Childers did another talk on BackupPC. He offered some helpful suggestions for my home archiving needs.

I missed some of the other interesting talks (ie PHP Security, Flavio daCosta), but I added the presentations to my del.icio.us links for posterity.

Nonetheless, I'm rejuvenated and humbled each time I attend one of these talks, because I always learn that there is much that I still must learn. Maybe I'll get to the LWCE next year.

AmaroK v1.4 beta -Review

| No TrackBacks

I am an avid music enthusiast; as such, I probably have over 5000 titles in my collection. Lately, I've been on a digitizing binge. Actually, I'm afraid that my cherished mixed-tapes will eventually become corrupted due to old age and the rigors of humidity. Previously, I used XMMS, and found it to be very capable of music playback. In fact, with the appropriate plug-ins, it can play any media file you throw at it. It even has plug-in extensions for mplayer, so you can view Windows media files, mpeg4, .mov, xvid, DiVX, and AAC/MP4.

Understanding the OSS model

| No TrackBacks

Came across an interesting article in the Economist. I'm always amazed at how pundits attempt to dissect the F/OSS model. The problem is that everyone tries to monetize and create metrics around a culture that is very different from contemporary approaches to building software or other products.

Yes, there are other industries that have chosen to adopt this model. Whether they will be successful remains to be seen; however, it is clear that this model has been successful in the software industry (ie Apache, Linux, embedded software like the TiVo, etc.). The problem I have is that venture capitalists and entrepreneurs are in a race for the next great means of making loot from OSS. It simply isn't a silver bullet. Its methodologies or principles cannot be used for everything under the sun. Most of these people have never read the fundamental books that describe this space (ie The Cathedral and the Bazaar, Just for Fun, etc). An even larger number have never used OSS, and do not understand how the community works.

Nonetheless, the article does have some interesting points.

Open-source business | Open, but not as usual | Economist.com

Strip Mining at the FOSS quarry

| No TrackBacks

TiVo is a very good example of a company which has taken advantage of Open Source software for its financial gain, without returning some of that value back to the community. Apple's Safari browser also shares the same codebase as the KDE browser Konqueror. Sure, some would argue that the Safari code base is so very different that it is essentially a fork of the original KDE project. However, it has been well documented that Apple used Konqueror as a reference in building its fairly popular Safari browser.
Now I understand that Apple has submitted software patents which will likely raise the ire of the KDE developers.

Taking this point a step further, there is also a 'dumping' phenomenon which is described quite nicely by a JBoss exec: the practice of returning crap, or code that really isn't useful, to the community after a company has reaped high gain. The idea of the GPL is to share any and all code that has been modified. GPL v3 will help mitigate this dumping. However, companies will always find other licenses (ie BSD) that give them an easy means of circumventing the original goodness of sharing code. Not very nice at all.


Enter The JBoss Matrix

Open Source Financial Software

| 2 Comments | No TrackBacks

Continuing the 'Linux on the Desktop' discussion, I have often heard people ask about heavy duty, industry standard financial software. Frankly, I use GnuCash. Granted it's not really designed for a large business (ie an accounting firm), but it suits me just fine. In fact, the package has made reliance upon Quicken a distant memory.

*Aside* - GnuCash has finally released a beta product based on Gtk2 libs, now that's newsworthy ;)

Now there is talk about another commercial product, TurboCash opening their source code. While I think it's great, I'm not sure if it's exactly newsworthy.

There are always firms who secretly desire to get access to more open source developers, so announcements like these garner the attention that they need.

Eventually, someone will probably reverse engineer their product anyway.


freshmeat.net: Category Reviews - Financial Software for Linux

Rude Linuxheads - Say it ain't so

| No TrackBacks

Continuing with the community theme. Interesting article that has some truth to it.
Moreover, I can sympathize with those who have been turned off from seeking help from Linux newsgroups or IRC.

Although I have used Linux for awhile, I too have experienced atypical behavior from a couple of developers. I seem to recall an instance where I sought help compiling Evolution from source, and quickly plunged into dependency hell. Obviously, I requested help from the Evo-hackers NG. There was a bit of arrogance and a basic disdain for the less informed user. I will add that this was not generally the attitude of the majority of the GNOME/Evolution developers.

I suppose the beautiful aspect of Open Source is that for every idiot, there will be at least three benevolent people. So it was that I discovered the FRG project, whose scripts made installing GNOME apps on a Slackware box mere child's play.

In truth, the whole idea of asking the smart question is really essential to getting useful help. If you take the time to document and understand what help is truly required, you're more likely to get the help you need.

Regarding the Self-Congratulatory Posture of the Cluetrain Manifesto ... :: AO

World Dominance

| No TrackBacks


Could this be the year of the Linux Desktop? I've heard this question repeatedly over the last couple of years. At this point, it is clear that the process of overtaking M$ will be a lengthy one. However, I'm not so sure that the effort is actually necessary. Linux has already displaced UNIX and NT in the server room. It is the largest and arguably the most successful Open Source project, the Apache webserver being the other. Despite all of the hoopla over the GPLv3, I really don't think it will slow Linux adoption or discourage commercial vendors from attempting to make a mint off Open Source software.

GNOME apps and Slackware

| No TrackBacks

Most of you know that I'm a fervent Slackware supporter. Despite the bad press that is unfairly heaped upon this very senior and robust Linux distribution, I have continued to use it exclusively to power my desktop since 1996. In fact, I credit Slackware with helping me understand the very powerful UNIX platform. It is well known that the distro is managed by one developer, Pat Volkerding, who recently and probably correctly decided not to support the GNOME libs. One of the best aspects of Slackware is its stability and the philosophy of not incorporating software packages that are considered exotic or unstable. Remember that the mission of Slackware is to provide a very stable and secure distribution that is easily configured and just works.

Mtools and Parted save the day

| No TrackBacks

I've been struggling with a weird computer problem, so I've not had the time to post anything lately. Besides the other distractions (Pinstripes || Tang Soo Do). Suffice it to say that I have it under control now. I think it might be instructive to discuss it here, as some of you might find it helpful. Additionally, there are still pieces of the problem which have me baffled. That is, I'm still having issues with my dual CPU setup. I've had this box since 2003, and the dual CPU setup worked fine using the earlier 2.6.5 kernel. Once I decided to add another 512MB DIMM (unbuffered) and upgrade the kernel to 2.6.11, all hell broke loose. I've posted my question to LKML, but to no avail. It really seems that I have a hardware issue and not a kernel bug.

Basically, if I enable the MP table within the BIOS, I get all of these weird IRQ vector (AB, AC, AD) trapping errors. I've tried passing 'noapic', 'noirqdebug' and a host of other workarounds to the kernel at boot time, but none of them have worked. I even removed the additional DIMM and I still get these problems. I also ran 'memtest86' and the test did not reveal any problems that would indicate bad memory. So for now, I'm stuck with a crippled uniprocessor box.
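For the record, this is roughly how I pass those parameters on a LILO-based Slackware box; the kernel image name and root device below are just placeholders for my setup, and you must rerun /sbin/lilo after editing:

# /etc/lilo.conf -- inside the stanza for the kernel being booted
image = /boot/vmlinuz-2.6.11
  root = /dev/hda3
  label = linux
  append = "noapic noirqdebug"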


Handheld Server

| No TrackBacks

Innovation of Open Source vendors never ceases to amaze. I find it very interesting that 20 yrs ago people asked, "How do you make money from free (as in beer) software?"

Now it appears that there are literally thousands of ways to make money from Linux and other Open Source projects.

Granted that anyone who has been working w/Linux for any substantial length of time could build their own server, I do think the product is pretty slick anyway.

RED HERRING | Linux Server Fits in Pocket

Penguicon 3.0

| No TrackBacks

Experienced my first Linux/Sci-Fi trade show in the Detroit-Metro area. Penguicon was very different from the LWCE and ALE conferences that I have witnessed in years past. The obvious difference is that there were no vendors. I also surmised that there weren't that many people travelling in from beyond a 100 mile radius. Apparently these differences make for a smaller audience, and perhaps a more enthusiast vibe. Moreover, there were a huge number of Sci-Fi fans; in fact, I believe that they significantly outnumbered the computer nerds.

I met Joe P, one of the GNOME/Beagle developers. He assured me that Beagle was superior to Google's desktop indexing tool.
Although I have never really enjoyed compiling GNOME packages on a Slackware box, I'll try my hand at compiling and installing Beagle. It really appears to be a very cool project.

Ode to a mail server

| No TrackBacks

Well, not exactly. I won't be writing any rhythmic prose in honor of an inanimate object.
However, I did want to share a recent experience I had in building my first mail server.
A client was unhappy with a Win98 machine whose System Registry had been overtaken with trojan adware. The machine was a 2-year old Dell GX-150 desktop, which had 128MB RAM and a 20GB hard drive. Basically, it only ran Office productivity software and a web browser for some of his office staff.

Hmmm. I thought, how could I increase the utility of his current hardware?
Why yes, I knew the answer was obvious. Once again, I put on my Linux evangelist hat and unfurled the Open Source banner. I offered to extend the useful life of his existing hardware and provide such useful services as print serving, file serving, disk duplication and a mail server.

Call For Help

| No TrackBacks

As I have stated in the past, I was first introduced to the internet in 1992. In fact, it was during my undergrad days at FAMU. We had old token-ring networks and used IBM's PROFs to send email to the rest of the world.

My computing affliction took root (pun intended) when I was introduced to UNIX in 1994. While at Texas A&M, I discovered that UNIX was an industrial strength OS that spawned the internet and later the World Wide Web.

Specifically, I was interested in developing webpages, so that I could tell my story to the world. In most cases, I used Sparc Solaris pizza boxes and vi editor to write mostly primitive webpages. Ultimately, I had to learn UNIX to manifest the webpages.

So I wondered, how could I do this cool stuff at home? In 1996, someone mentioned that I could run an OS called Linux. In those days, it was best known as a UNIX clone for the x86 platform.


The Open Source Paradigm Shift

| No TrackBacks

I came across this very well written article concerning the paradigm that is Open Source. In keeping with my mission of bringing to light the very important concept of balancing the software landscape, and empowering the consumer, I figured I'd share this one.

tim.oreilly.com -- Various Things I've Written: Tim's Archive

Linux In The Enterprise

| 2 Comments | No TrackBacks

In an effort to recoup the many weeks of inactivity, allow me to share a few items with you.
I had the pleasure of doing another Linux talk.
This time I spoke to a group of Computer Science headz at the Detroit chapter of the Black Data Processors Association.

It marked the first time I actually spoke to an audience that was not made up of Linux geeks like me. It was quite refreshing and somewhat challenging to extol the virtues of Linux to a large contingent of M$ users. Nonetheless, the talk was quite successful.
You can peep the presentation here.

Project Heresy - Final Chapter

| No TrackBacks

Last update on the Dell Optiplex saga. The USB and sound card are fully functional. Actually, it was rather trivial getting them working correctly. I still have not updated the Evolution and Mozilla packages to their latest versions.

Nonetheless, the user has been quite productive and is able to use the huge assortment of Linux software without relying on any Microsoft products.

I plan to setup desktop sharing via the KDE desktop, so that I can assist her without having to drive 20 minutes to their home.

All that remains is teaching her some of the nuances of the applications and desktop tools.

Had lots of fun rebuilding her box. Learned a great deal in the process too.

Project Heresy - Revisited

| No TrackBacks

Well, it's time for an update to the saga of the dreaded Dell Optiplex GX-1. You may recall that I embarked on a mission to convert a virus riddled M$ Win98 machine to a shiny new Linux box. Initially, I ran into some roadblocks, in that the PC BIOS was archaic at best. It simply refused to boot from CD. So, I abandoned the installation of RedHat Fedora.

Operation: Project Heresy

| 5 Comments | No TrackBacks

Some of you know that I've been an Open Source/Linux advocate since 1996. I rarely waste an opportunity to unfurl the Linux banner, and wave it proudly in the faces of those who are afraid to detach themselves from the M$ umbilical cord. In fact, I no longer use M$, err I mean Microsoft products at home. One of my friends recently learned that their outgoing mail (SMTP) service was interrupted by their ISP, b/c their Outlook inbox was infested with the Netsky worm.

So unknowingly, they had become a spam relay, and their ISP shut them down without notification. Recognizing their ISP's concern for their other customers, I understand these actions, albeit pretty nasty.

World Domination - LWCE

Wow, it's been nearly 6wks since I posted anything to this blog. I suppose a severe scolding is in order. Actually, I have a very good excuse. I migrated my blog from the cheesy Blogger to the very cool and more secure Movable Type Publishing Engine. I still have some cleaning up to do here.

Style sheets seem to require more effort. Additionally, my old comments didn't import properly. I expect to have it fixed in short order.

I author this scribe from the floor of the Linux World Conference and Expo (LWCE), at the Jacob Javits Center in NYC.

This is my 3rd Linux conference and 2nd LWCE. I attended the 2001 LWCE and had a great time.
As soon as I walked into the conference hall I ran into a Linux notable. I met John 'Maddog' Hall. I chatted with him and former InfoWorld columnist Russ Pavelic. I'll post some pics at a later date.

Another interesting event is that M$ has crashed the party. They have a very large booth here. How dare they. I suspect that they are attempting to grab developers from the Open Source community.

Anyway, I'll share more later.
