2015-02-20

Non-global learnyounode without much typing

For whatever reason, when you dive into Node.js you come across lots of code that tells you to install command-line JavaScript programs "globally" into /usr/local. Lots of examples say to do this using the sudo command, e.g. `sudo npm install -g learnyounode`, and others say they get messed up doing that, so they suggest just changing the ownership of /usr/local to be you.... I get the feeling that most Node.js creators and users are working off of MacBooks or something, have a very single-user view of their computer, and perhaps play a little loose with security.

This was a big hurdle for me to get over when I first started playing with Node.js on my Ubuntu desktops and Debian servers. It was like the Ruby version thing all over again, times a thousand. I didn't really want one-off programs like learnyounode stuck in my /usr/local forever. I thought the thing to do was to use npm's prefix option, but even then I wasn't sure I wanted the prefix/bin files in my $PATH all the time.
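
For reference, the prefix route would have looked something like this; the directory name is my own invention, not an npm convention:

$ npm config set prefix "$HOME/.npm-local"
$ export PATH="$HOME/.npm-local/bin:$PATH"   # the part I didn't want hanging around
$ npm install -g learnyounode                # now lands under ~/.npm-local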

Fortunately, once I learned a bit and got the search terms right, I found that others were also trying to solve this dilemma. One of the solutions I liked was using npm run to run the scripts npm installs into node_modules/.bin. It let me use those binaries when I was in the package's folder without committing to them any other time, which appeals to me more than any of the $PATH-modifying solutions. So, to use nodeschool.io's javascripting or learnyounode interactive modules, it was as simple as this:

mkdir -p node/learn
cd node/learn
npm install javascripting
npm install learnyounode
edit package.json
>>> in package.json
...
"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "learnyounode": "learnyounode"
},
...
$ npm run learnyounode
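
Worth noting: npm links these executables into node_modules/.bin, so you can also skip the scripts entry and run one directly:

$ ./node_modules/.bin/learnyounode

The npm run form just saves typing the path.
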
I found it tedious after a while to type such a long command, especially when adding program arguments. So I used a simple alias for that shell instance:

$ alias learnyounode='npm run learnyounode'
$ alias lyn='npm run learnyounode'
You can do one or the other or whatever you like. I decided that even learnyounode was annoying to keep typing, so I used lyn.

I really like this solution for working with these interactive programs. I will see what challenges arise as I get more advanced in my node.js and npm usage. I can foresee wanting a "user local" install but still wanting to slip in and out of it. Maybe using a chroot or something.

One thing this method doesn't do is let these package-installed binaries like learnyounode and javascripting work from any directory, so their instructions to "make and change into a new directory" don't work. Instead, the "learn" directory with node_modules is where I create all my practice programs.
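
If I ever do want these to work from any directory, the lyn alias could become a small shell function that hops over to the learn directory first. A sketch, assuming my layout and npm 2's `--` argument pass-through:

lyn() {
    ( cd "$HOME/node/learn" && npm run learnyounode -- "$@" )
}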

2015-02-17

LXC multiple personality disorder

I have a couple of server systems that have been running as Linux Containers (LXC) as a test since Debian 6.0 (squeeze). The host system has been upgraded to Debian 7.0 (wheezy), lxc version 0.8.0-rc1, and things generally work fine, but I rebooted the other day and the containers fell to pieces.

After manually stopping the containers (or so I thought) and starting them up again, one of the containers was fine. The other, not so much. Connecting to it remotely with the PuTTY ssh client would either fail immediately with "Network error: Connection reset by peer", or it would work for some seemingly random amount of time, from a second to minutes, before another error appeared: "Network error: Software caused connection abort".

Scouring the web I found lots of suggestions saying it was missing config files or keys in the instance's /etc/ssh/ directory, but I knew this was not the case. The files were there and the connection worked, sometimes. Plus I did some tests running netcat (nc) as a client and a server and those connections also failed either after a while or sometimes right away. Sometimes when connecting to the server instance I had just started I would be told the connection was refused.

I started to believe that I had another server running in my network that claimed the same IP address and server name on login. This belief moved to some kind of server multiple personality disorder when I saw that my tmux session sometimes existed and sometimes didn't on login even though the file I had created in the "tmux exists" connection was there in the "no tmux" connection.

I popped onto the lxc IRC channel on freenode for some advice. A fellow user, wam, ran me through some tests. I wasn't running out of memory. My configuration was very similar to the working container's. No firewalls were blocking stuff on this internal private network. He suggested that I use tshark to track down the possible TCP RST, so I went (t)shark fishing:

  4.999284 3com_c0:25:71 -> 46:4d:07:7e:87:9c ARP 60 Who has 192.168.1.33?  Tell 192.168.1.4
  4.999327 46:4d:07:7e:87:9c -> 3com_c0:25:71 ARP 42 192.168.1.33 is at 46:4d:07:7e:87:9c
  5.007975 fa:11:43:eb:f6:eb -> 3com_c0:25:71 ARP 42 Who has 192.168.1.4?  Tell 192.168.1.33 (duplicate use of 192.168.1.33 detected!)
  5.008086 3com_c0:25:71 -> fa:11:43:eb:f6:eb ARP 60 192.168.1.4 is at 00:50:da:c0:25:71 (duplicate use of 192.168.1.33 detected!)
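
For the record, an ARP-only capture like that comes from something along these lines (br0 is the bridge on my host; flags may vary by tshark version):

tshark -i br0 -f arp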

Duplicate use of 192.168.1.33 detected with different mac addresses? I thought I had just ruled out multiple servers.

DeHackEd on the LXC IRC channel, #lxcontainers, suggested checking brctl showmacs, which I filtered further using other information he shared:

brctl showmacs br0 | grep -v '  1'
port no mac addr                is local?       ageing timer
  4     46:4d:07:7e:87:9c       no                70.08
  2     fa:11:43:eb:f6:eb       no                22.85
  2     fe:68:85:a8:dc:6e       yes                0.00
  3     fe:b0:fc:d7:2e:9f       yes                0.00
  4     fe:dd:ac:a6:ac:f4       yes                0.00

Both of the systems claiming 192.168.1.33 are running on the LXC host. Odd. Using lxc-ls and lxc-list shows only the working container and the broken one, not three. Another person, devonblzx, suggested that I just specify the hwaddr in the lxc config file. I had, in fact, done this once upon a time, but I had long since commented it out; I don't remember why. Maybe it wasn't the unique lxc.network.hwaddr but the non-unique lxc.network.name (both sketched in the aside below) that was tripping me up. After what DeHackEd and devonblzx pointed out in /sys/class/net/$bridgename/brif/ and /sys/devices/virtual/net/$bridgename/, I bet it was the .name value. I should try it again.

The question that nagged me was: how had I launched duplicate instances, and would setting hwaddr protect me? DeHackEd said I'm not supposed to be able to launch multiple instances, at least not as the same user, due to control channels that use names that would conflict. I thought it might be a bug in lxc 0.8.0, so I shared my ps output, which showed there were indeed three instances, with two pointing to the same config file:

root 9287 0.0 0.0 20920 664 ? Ss Feb13 0:00 lxc-start -n lxc -f /etc/lxc/auto/brokencontainer.conf -d
root 18816 0.0 0.0 20920 668 ? Ss Feb13 0:00 lxc-start -d -n brokencontainer

DeHackEd promptly said "no name conflict..." and it took me a minute to spot it. One had been started with the name lxc. I asked why lxc-ls and lxc-list didn't show it, but no one volunteered an answer, so I dove into the start-up process to figure out why it had started with -n lxc.
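
A quick aside on those config keys before the start-up spelunking: in an lxc 0.8-era config the network section looks roughly like this (the values here are invented for illustration):

lxc.network.type = veth
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.hwaddr = 00:16:3e:aa:bb:cc

Pinning hwaddr at least keeps the MAC stable across starts; whether it would have prevented my duplicate-IP mess is the part I still need to test.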

My configuration, carried over from Debian 6.0, worked like this: the /etc/default/lxc file both specified that I wanted containers run from init and listed the CONTAINERS I wanted to start. The new init script didn't care about the CONTAINERS variable anymore; instead it looked at /etc/lxc/auto/* and tried to derive the names from those entries. My /etc/lxc directory looks like this:

/etc/lxc/auto/workingcontainer.conf -> /etc/lxc/workingcontainer.conf
/etc/lxc/auto/brokencontainer.conf -> /etc/lxc/brokencontainer.conf
/etc/lxc/workingcontainer.conf
/etc/lxc/brokencontainer.conf
/etc/lxc/debconf

This is not quite how the README.Debian file suggests things to be:

LXC container can be automatically started on boot. In order to enable this, the LXC init script has to be enabled in /etc/default/lxc and any container that should be automatically started needs its configuration file symlinked (or copied) into the /etc/lxc/auto directory.
Note that the name in /etc/lxc/auto needs to be the container name, e.g.:
  /etc/lxc/auto/www.example.org -> /var/lib/lxc/www.example.org/config
I joined the #debian channel on the OFTC IRC network to get some advice and figure out if my init.d/lxc script or something else was messed up, and peter1138 helped straighten me out. He said he had a similar setup when he first upgraded from squeeze, but when he created a new container in wheezy he saw that the config files were in /var/lib/lxc/containername/config and that /etc/lxc/auto/containername pointed there.

This made it so the init script works by extracting containername from the folder holding config. Nothing in that process cares what the file in /etc/lxc/auto/* is named; it just has to be a symbolic link to a file in a directory whose name is the container name you want. I complained about config files in /var, broken upgrades, and a seemingly misleading emphasis on the auto/ name and how autostart works, and was given the "file a bug!" challenge.
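
Concretely, the layout the wheezy init script expects looks like this for my broken container; the move itself is my sketch of the migration, not a procedure from the README:

mkdir -p /var/lib/lxc/brokencontainer
mv /etc/lxc/brokencontainer.conf /var/lib/lxc/brokencontainer/config
ln -sf /var/lib/lxc/brokencontainer/config /etc/lxc/auto/brokencontainer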

I think it would be even better if the script just read the lxc.utsname from the file, as peter1138 suggested; then it could be a symlink or a copy of any file, without needing a specific directory layout. I said it didn't seem that the name in auto/ needed to be the container name at all, and peter1138 agreed that for autostart to auto start this was true, but if the name was the container name then lxc-list would tag the container in the listing as autostart.

I hope this is helpful to someone else facing similar sounding issues even if that someone else turns out to be a future me.

2015-01-28

IRC SSL Client Certs

ChatZilla supports SSL connections and auto-identifying with SSL client certificates on the OFTC and freenode IRC networks, using CAcert WoT user certificates and StartSSL free email-verified certificates. You may have trouble using StartSSL verified (Class 2) user certificates. Tested using ChatZilla 0.9.91.1 in Firefox 35.0.1.

2014-05-06

How do I handle fstab mounts under run in Debian Wheezy?

A release goal for Debian 7.0 ("wheezy") was to introduce a new top-level directory, /run, and relocate system state information that does not need to persist through a reboot but that may need to be written early in boot or while the root filesystem is read-only. Other distributions are also introducing /run, and a proposal has been submitted to include it in the Filesystem Hierarchy Standard (FHS).

This is all fine and well, but it has tripped up automated mounting of /etc/fstab entries under /run (formerly /var/run).
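
For example, a hypothetical entry like this one (made up just to illustrate the shape of the problem) would formerly have lived under /var/run, but now targets a directory that may not exist yet when mountall.sh runs, because /run is a fresh tmpfs at every boot:

tmpfs  /run/myservice  tmpfs  nosuid,nodev,size=16m  0  0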

The proposed update to debian-policy says this:
Files and directories residing in '/run' should be stored on a temporary filesystem and not be persistent across a reboot, and hence the presence of files or directories in any of these directories is not guaranteed and 'init.d' scripts must handle this correctly. This will typically amount to creating any required subdirectories dynamically when the 'init.d' script is run, rather than including them in the package and relying on 'dpkg' to create them.
Can I then conclude that /etc/init.d/mountall.sh is not handling /etc/fstab correctly with regard to mounts under /run? Or that there should be another init.d script to handle the /etc/fstab mounts under /run? Or did the writers expect that fstab mounts under /run are invalid, and that everything under it should be created programmatically by the individual services and generally be fixed up by their init.d scripts?

2014-04-23

Exploring StartSSL - Automated Registration Email

Reading about the decision to no longer include CAcert.org in the Debian ca-certificates package (Debian bug 718434, LWN: Debian and CAcert), I was introduced to StartCom's free certificate offering. As I investigated their site, I was both intrigued by the free offering and the Web-of-Trust program idea, and put off by unclear and sometimes conflicting information.

For the impatient, the TL;DR version is this:

  1. Sign up first for a free (Class 1) certificate by clicking Sign-up For Free in the top left of the site. Everything else is confusing.
  2. Use an email address that doesn't do greylisting, spam filtering, or anything of the sort, and that you have access to the logs on (is this service only for "techies"?)
    1. If you do have greylisting or spam filtering that blocks the web page's test, so they give you big red text telling you you're all wrong, disable it or at least allow the names and IP addresses in their SPF record. (Yes, I guess this service is only for "techies.")
  3. If the form submits without telling you your mail server is wrong but you don't get an email pretty quickly, log out (top-right corner icon) and try registering again.

If you'd like to learn more of the details or share my pain, read on:

All paths seemed to lead to getting a certificate so I settled on starting with the StartSSL Free (Class 1) certificate since I wasn't sure exactly what the requirements were to get the StartSSL Verified (Class 2) one. After deciding that "Sign Up" and "Express Lane" are the same thing, and seeing that I must fill out the form as an individual, I entered my personal (gmail) address.

This took me to a page asking me to check my email right away and copy/paste in the code they sent. Now, Gmail is usually very fast about showing new email, but nothing was there. Not in Important and unread, not in Everything else, and not even in the Spam folder. Not several minutes later, either. The page was very insistent that I not leave or reload it, so in a new tab I started searching for answers.

The first answer I came across can be summarized as "it must be your problem", with no additional suggestions. I have come to identify this as a common communication style from StartCom:
Important! Experience has shown that the failure of email messages not arriving are always the fault of the receiving end. If the wizard confirms to having sent the message, i.e. no error occurred, than the message has been delivered and accepted by your mail server!
Surely they've had Gmail users go through this process before. So strange that it wouldn't work. After all, I wasn't using one of their blacklisted email providers listed on their enrollment page. I decided to try again from a different browser using my work email address, the one that I manage and have access to the server logs on. This is what I learned.

When you click Continue on the enrollment page, your server gets hit from one site; in my case it was [212.117.158.94]. If you have greylisting in place (the work server does) and it sends back an error like 450, the web page immediately tells you it couldn't deliver the email. It does mention that the problem could be greylisting, among other things, and basically says it's your fault. So you try to open up your greylisting to allow startcom.org through, but that doesn't seem to be enough, because for some reason the client name comes through as unknown. (Edit: I had recently upgraded our mail server and I believe the "unknown" issue was a local configuration problem.)

So you add their IP address, and then the web page thinks all is well and sends you to the "wait for it" code confirmation page, but still no email. Why? Probably because the web page only does a test connection. Right after it sends you to the next page, another server, [192.116.242.7] in my case, connects (also with client_name=unknown) and gets greylisted. So I sat there waiting, hoping for a retry, feeling stuck with no help. Back to searching in a new tab.

The second answer I came across also says "it must be your problem" :(
The program always sends the verification code! Do not blame us, if it does not arrive....we do not have control over your mail server and mail account!
Third time's the charm? Good thing I have three browsers installed. I checked the SPF (TXT) record for startcom.org, added all of the names and IP addresses listed into my server's greylisting client whitelist, and tried again from the third browser using the work email address. Success! The email made it to my inbox.
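
For anyone in the same spot, listing the record is a one-liner, and the allowances go into whatever drives your greylisting; the path below assumes Debian's postgrey, which may not match your setup:

$ dig +short TXT startcom.org               # shows the SPF names and addresses
$ sudoedit /etc/postgrey/whitelist_clients  # add those entries, then reload postgrey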

I didn't really want to do the certificate in the third-choice browser, so I went to the second browser and pasted the code there. It failed to verify, but the failure message told me something I would have loved to know long before. I didn't copy the exact message, sorry, but it basically said "if it fails, log out and try to sign in again". A "resend this request" button would have been better, but at least now I know that I don't have to stand like a deer in the headlights on the "wait for it" page when things fail.

Now I just have to wait 6 hours for the account to be reviewed, probably because I tried so many times.

Good luck. I may end up dabbling with CAcert or Comodo, or retreating to my own self-signed certificates again.

2014-01-18

FamilySearch Indexing on Ubuntu 13.10 x86_64 via Oracle JRE 7

The hard disk drive in my old used Toshiba Tecra A9 suddenly died a death of a thousand bad blocks, so I replaced it with an SSD (snappy!) and re-installed Ubuntu. They seemed to be pushing 64-bit, unlike a couple of years ago, so I went with that because it was supported, not because I have oodles of BIOS-hidden RAM.

The next thing to do was install some of the programs I use the most. NetBeans and Eclipse are right up there, so the first stop was Java. I like the idea behind OpenJDK and recognize the push to use it, but I've been bitten enough times in the past by performance, display, and outright breakage issues (looking at you, Web Start) that I go straight to the Oracle JDK. Sorry, guys. I used the webupd8.org PPA to install it. Good stuff. One less thing to manage in /opt. Next came NetBeans, Eclipse, and the Arduino IDE, all Java based.

Since I was on a roll with Java-based programs, I thought I'd put FamilySearch Indexing back on. The Eclipse "unzip it where you want it" approach and the NetBeans and Arduino installers had gone well enough that I didn't expect any trouble, but trouble is what I got:
bin/unpack200: not found
After a few false starts, I found the best answer for this issue on the LDSTech forums, where I was put onto the idea that it was a 32-bit compatibility issue on some 64-bit setups and the work-around was to install the ia32-libs package. So I tried that:
Package ia32-libs is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
  lib32z1 lib32ncurses5 lib32bz2-1.0

Reading around, I found more confirmation that the ia32-libs package was going away. This seemed like a roadblock, except that the error had to do with decompressing. Maybe ia32-libs amounted to little more than those three packages, so I tried the first, lib32z1.
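
That try was the obvious one-liner, on the guess (mine, not the forum's) that unpack200 is a 32-bit binary wanting the 32-bit zlib:

$ sudo apt-get install lib32z1

The unpack200 error went away, but another appeared: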

java.lang.NoClassDefFoundError: java.awt.Container
    at com.install4j.runtime.installer.frontend.headless.AbstractHeadlessScreenExecutor.init(Unknown Source)
    at com.install4j.runtime.installer.frontend.headless.ConsoleScreenExecutor.<init>(Unknown Source)
    at com.install4j.runtime.installer.frontend.headless.InstallerConsoleScreenExecutor.<init>(Unknown Source)
    at com.install4j.runtime.installer.Installer.getScreenExecutor(Unknown Source)
    at com.install4j.runtime.installer.Installer.runInProcess(Unknown Source)
    at com.install4j.runtime.installer.Installer.main(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at com.exe4j.runtime.LauncherEngine.launch(Unknown Source)
    at com.install4j.runtime.launcher.Launcher.main(Unknown Source)
How could java.awt.Container not be defined? This sounded like a Java Runtime Environment (JRE) issue, but I knew mine was fine; I had just installed and tested three IDEs with it. It wasn't until this point that I noticed it was an install4j-based installer, so I started including that in my searches.

The problem wasn't unique to FamilySearch Indexing; I found someone trying to troubleshoot it for Visual Paradigm for UML, among other things. Their solutions were the now-obsolete ia32-libs or making sure the installed JRE was good.

Then I came across a post by Matthew O. Smith talking about Indexing and the obsolete ia32-libs. He installed a number of extra libraries, including some i386 ones, and then installed using the headless option to get things going and save on installing a few more libraries. I felt I had gone far enough with lib32z1, so I decided to try a different route: running the Indexing software against my Oracle Java 7 environment. To do that I first leveraged the headless install tip Matthew gave:

./Indexing_unix.sh -J-Djava.awt.headless=true

Then I modified a copy of the install4j shell-script launcher it created in $HOME/.FamilySearchIndexing/indexing.familysearch.org/:

$ diff indexing indexing-jre7
4c4
< # INSTALL4J_JAVA_HOME_OVERRIDE=
---
> INSTALL4J_JAVA_HOME_OVERRIDE=/usr/lib/jvm/java-7-oracle
114c114
<     if [ "$ver_minor" -gt "6" ]; then
---
>     if [ "$ver_minor" -gt "7" ]; then
I copied the .desktop entry, tweaked it to point to my modified launcher instead, and now I'm in business again.
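
The tweak is essentially just the Exec line; the copied entry ends up looking something like this (trimmed to the relevant keys, with an example home path):

[Desktop Entry]
Type=Application
Name=FamilySearch Indexing (Oracle JRE 7)
Exec=/home/me/.FamilySearchIndexing/indexing.familysearch.org/indexing-jre7
Terminal=false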

2013-11-30

Java RESTful Web Services, NetBeans Style

I’ve been interested in exploring Java RESTful web services to serve as back-ends for some AngularJS front-ends, with my current focus on JAX-RS implementations.

Blaise Doughan has been blogging a lot about EclipseLink, JAXB, and MOXy. I decided to follow the code example in his post MOXy is the New Default JSON-Binding Provider in GlassFish 4 using NetBeans 7.4 since the Java EE download bundles an install of GlassFish Server Open Source Edition 4.0.

Start by creating a new Java Web Application by choosing New Project from the File menu, going to the Java Web category and selecting Web Application. To keep things the same as his example, name it CustomerResource. Select the GlassFish Server 4.0, Java EE 7 Web, with the suggested context path of /CustomerResource. If you run this right away you should be served the index.html page saying “TODO write content”.

We will work backwards a bit in his blog post, building a little infrastructure before we use it. So first we will right-click Source Packages and add a new Java Class named PhoneNumber in the org.example.model package. Paste or type his code into this class. Do the same for the Customer class. NetBeans will suggest you use the diamond inference and make phoneNumbers final. The code works fine either way.

Still working up through his post, we will create the CustomerApplication and CustomerService classes in the org.example.service package. At this point you should be able to click Run and visit the local URL to get our “hello world”-style XML response for Jane Doe:

http://localhost:8080/CustomerResource/rest/customers/1
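
If you'd rather poke at the service from a terminal, curl works too; the Accept header trick below is my own habit, and the exact payloads depend on the JAXB mappings in the Customer class:

$ curl http://localhost:8080/CustomerResource/rest/customers/1
$ curl -H "Accept: application/json" http://localhost:8080/CustomerResource/rest/customers/1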

Everything up to this point “just works” in the excellent NetBeans IDE and GlassFish Server, but I was interested in his JSON tweaks, having seen some of the shortcomings he mentions whenever I don’t map to a JSON object by hand. To do some testing I first commented out the APPLICATION_XML line from the @Produces list so that I could see (download) the output, and then moved forward to Customizing the JSON-Binding, with its use of MoxyJsonConfig. This is where I was stumped for a bit.

Pulling in the JAXBContextProperties wasn’t a big deal. The EclipseLink library bundled with GlassFish seemed to have what I was after: just right-click the Libraries folder in the project, choose Add Library, then select that library and click Add Library.

To get MoxyJsonConfig, download jersey-media-moxy-2.4.1.jar and stick it someplace handy; I use a folder named Libraries in my NetBeansProjects folder. Then right-click the Libraries folder in the CustomerResource project in NetBeans and click Create in the Add Library dialog. Name it something like Jersey Media Moxy, and then in the library classpath use Add JAR/Folder to add jersey-media-moxy-2.4.1.jar. Then add this library to your project.

At this point you should have output like Blaise has documented for New Response in his blog post. Enjoy.

If you can’t find jersey-media-moxy-2.4.1.jar or the API has switched around again and a later version is missing the dependency, then read on for my tale of woe and sorrow trying to locate it in the first place. Perhaps it will help.

I haven’t jumped on board with Maven yet, so when I came across Blaise’s follow-up question to the StackOverflow question Cannot import EclipseLink MOXy while searching for MoxyJsonConfig, where he implied the use of Maven, I was a little disappointed. I was equally disappointed when my next dozen searches all failed to find the jar containing MoxyJsonConfig. I could find API docs, people talking about using it, and so on. findjar.com failed. Even mvnrepository.com searches failed. Finally, after Google searches on varying portions of the class and package names, one for org.glassfish.jersey.moxy pointed me to jersey-media-moxy on mvnrepository.com. Unfortunately it pointed me to 2.0-m07, which has MoxyJsonConfiguration and not MoxyJsonConfig. I didn’t realize that right away and tried implementing with it. It doesn’t work. The current latest version, 2.4.1, has MoxyJsonConfig and does work. I have no idea when things changed or what version Blaise used.
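
In hindsight, once you know the Maven coordinates (groupId org.glassfish.jersey.media), you can skip the hunt and pull the jar straight off Maven Central by its standard repository layout:

$ wget https://repo1.maven.org/maven2/org/glassfish/jersey/media/jersey-media-moxy/2.4.1/jersey-media-moxy-2.4.1.jar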

I was glad to finally find the jar, but there has got to be some better way to find a class and know what version of things people are talking about. If there is, please share. If there isn’t, please keep this in mind when sharing code examples.