Saturday, February 01, 2020

HOWTO: Environment Sensors in the Home

[ it has been a while since I had something to write about; this is something I feel can help others ]

We have a large aquarium at home, and last year a leak occurred in one of the pumps in the equipment space below. Water overflowed the catch-bucket and flowed into our walls. We ended up with some water damage. Never fun.

I recall a friend of mine had a pipe leak in his condo while he was out of town. The management company called him about the water flowing out from under his door. He signed into his in-condo camera system and saw a "shimmer" on the floor: it was completely covered with water. Checking his NetAtmo device, he saw (belatedly) the jump in humidity within his condo. That humidity jump gave me the idea to monitor the humidity in our aquarium equipment space, and alert us when it gets "too high".

While I could put together something with the DHT22 sensors that I have lying around, I wanted something more turnkey. This post describes the overall system that I assembled, using mostly off-the-shelf components, plus one small service that I wrote. Everything was installed on an Ubuntu 16.04 server in my house, and should work on any Unix-ish server (I don't know how portable the components are to Windows).

In short, the major components:

- a TeHyBug environment sensor
- the Mosquitto MQTT broker
- InfluxDB
- a small bridge service (the one piece I wrote)
- Grafana

TeHyBug Sensor

I bought the TeHyBug Mini, but Oleg makes a variety of sensors based on what environmental measurements you want to make, and whether your application will be indoor or outdoor. These are small devices powered by battery, or by a standard USB charger. There are a variety of options you can select, and mine shipped from Germany to Texas in a single week. A friend has emailed Oleg, and reports that he is quite responsive and helpful.

In "configuration mode", the TeHyBug presents a nice web interface with all the configuration options. It supports several mechanisms to report environment data, and I configured it to send its data via MQTT to my server. I was able to specify the format of the payload, enabling me to keep it tight, and easy to parse. I only have a single sensor, but I can imagine needing to get a bit more general if you load up on a large variety of sensors and applications.

Mosquitto MQTT Broker

This is a simple pub/sub system. Data arrives from the TeHyBug on a particular "topic", and is delivered to all connected clients that have subscribed to it. There isn't much to do here, except install the software.
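To sanity-check that samples are actually arriving at the broker, a tiny subscriber is enough. Here is a minimal sketch using the paho-mqtt client (the topic name is just a placeholder, not necessarily what the TeHyBug publishes to):

    # Minimal MQTT subscriber to verify samples are arriving.
    # "sensors/aquarium" is a placeholder topic name.
    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        client.subscribe("sensors/aquarium")

    def on_message(client, userdata, msg):
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("localhost", 1883)
    client.loop_forever()

If payloads print here, the sensor and broker sides are working; everything else is downstream.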

InfluxDB

This is a high-performance, hugely-scalable time-series database. The website says it can ingest millions of samples per second. Yikes. Total overkill for my needs, but it has a simple Python API (see below), and the Grafana frontend knows how to connect to it. Turnkey.

I ended up having to create a username/password because Grafana requires one. Before I started the Grafana setup, I was able to put the MQTT data right into InfluxDB without specifying a user. So, just go ahead and create one (I used the Python API to do it; there is a command-line tool for this kind of thing, but I never installed it).
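A minimal sketch of that, using the influxdb Python package (the database name, user, and password here are placeholders):

    # Create an admin user and write a sample point with the influxdb
    # Python client. Database, user, and field names are placeholders.
    from influxdb import InfluxDBClient

    client = InfluxDBClient(host='localhost', port=8086)
    client.create_user('grafana', 'some-password', admin=True)
    client.create_database('sensors')

    client.switch_database('sensors')
    client.write_points([{
        'measurement': 'environment',
        'tags': {'location': 'aquarium'},
        'fields': {'temperature': 21.4, 'humidity': 57.2},
    }])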

Bridge Service

This is where I needed to dig in further. Data arrives at the mosquitto MQTT broker, and goes nowhere. There must be a client subscribed to the topic for the data to move any further. It looks like "Telegraf" may be a package for moving samples from MQTT into InfluxDB, but it seemed more complicated than I needed.

I found a couple of Python examples for moving samples, but I didn't like them. I wanted a configuration file, I wanted it to run as a service, and I needed to handle my specific payload and InfluxDB measurement format.

The resulting code for the bridge is located in my OSS repository. It includes a systemd service file, and an example config file. It simply hangs around, waiting for samples to arrive, and shoves them into InfluxDB. Easy peasy.
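The service itself is small. Stripped of the config-file handling, the core of it looks roughly like this (topic, payload format, database name, and credentials below are placeholders; the real service reads them from its config file):

    # Skeleton of the MQTT-to-InfluxDB bridge: subscribe to the sensor
    # topic, parse each payload, and write it as an InfluxDB point.
    import paho.mqtt.client as mqtt
    from influxdb import InfluxDBClient

    influx = InfluxDBClient(host='localhost', port=8086,
                            username='grafana', password='some-password',
                            database='sensors')

    def on_connect(client, userdata, flags, rc):
        client.subscribe('sensors/aquarium')

    def on_message(client, userdata, msg):
        # Parse a compact "key=value;key=value" payload into float fields.
        fields = {}
        for item in msg.payload.decode().split(';'):
            key, value = item.split('=')
            fields[key] = float(value)
        influx.write_points([{'measurement': 'environment',
                              'fields': fields}])

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect('localhost', 1883)
    client.loop_forever()

The included systemd unit simply keeps a script like this running, restarting it on failure.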

Grafana

This part was quite a bit more complicated. There is a Grafana package in the Ubuntu 16.04 package repository; installing that was my mistake, as it is way too old. I switched to the grafana.com package repository and installed the stable version from there. Once I did that, attaching Grafana to InfluxDB was easy.

Then I wrestled with configuring a dashboard, but that's just user education. After reading some documentation, and some Q&A on the web, I was able to get the graph I wanted.

My next step is to hook up an alert on the Humidity, and it looks like Grafana has plenty of options.

---

I hope this helps. The TeHyBug is an affordable, turnkey solution, but you can imagine data coming from many other IoT devices: an Arduino, an RPi, a Particle device, and others, each delivering data to the MQTT broker. The rest of the solution would be the same from there, with minor tweaks in the bridge to parse the different MQTT payloads.

Saturday, November 19, 2016

PSA: Redbud Custom Homes

Most of my recent posts have been technology-related, but for this ... I'm throwing back to something a bit more personal. This is a Public Service Announcement (PSA) to provide my opinion (and my wife's) on the Austin-area builder named Redbud Custom Homes (and its owner Michael Alwan). In short: we think they suck more horribly than any other business that we've ever interacted with.

On their since-deleted Facebook business page, I wrote my thoughts:
Completely unprofessional. Poor communications, poor scheduling, poor follow-through, incomplete work, requires nagging, ... on and on. Michael is personable, but (IMO) does not have the skills to run a business and to *follow through* on completing the home we contracted him to build.
I have countless stories of failure, so I don't even know where to start. Ask a question if you're actually considering working with Redbud. I'll provide my own experience.
My wife has also written a series of Facebook posts about our problems, detailing her opinions on the matter:

She has prepared a long review for Houzz (unpublished, at this time), but the abbreviated review on Yelp still provides a lot of material for reading.

The anger and frustration that we feel is only partly described by the above posts. After over a year of neglect, we filed a complaint with the Better Business Bureau; it is now published on their website. A small thing, but symbolic of the overall failure to execute.

If you are thinking about contracting with Redbud Custom Homes, then please read our thoughts to help form your own opinion.

I hope this PSA saves somebody from similar pain in the future. Good luck.

---
Updated 12/15/2016: added a couple more links, some labels, BBB complaint, and noted owner/proprietor as Michael Alwan
Updated 12/20/2016: link to Yelp review

Wednesday, October 07, 2015

Value of ASF Projects

Matt Asay wrote an interesting piece last week that took a rough stab at the "worth" of Open Source code under the care of the Linux Foundation. All the right caveats are there, of course: this isn't really the "worth" of the code, but an approximate cost in developer-years to produce that many lines of code. Fair enough, but when the number that pops out is $5 billion, that says something awesome. No matter how you may want to fiddle with the methodology, there are very few companies on the planet that can or have produced that much code.

Then he threw out the question: does the code under the umbrella of the Apache Software Foundation have that beat? It made me curious ...

I went to OpenHub and got its list of 340 Apache projects. For each project, I fetched the "lines of code" dataset used to produce the project's chart of LOC over time. After rejecting some edge cases, I had LOC for 332 of the Apache projects that OpenHub knows about. The result?

The ASF represents 177,229,680 lines of code, compared to Linux Foundation's 115 million.

So yes, by this crude measure, the ASF is "worth" something like $7.5 billion.
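For what it's worth, that figure is just proportional scaling from the Linux Foundation numbers, using round inputs:

    # Back-of-the-envelope scaling: if roughly 115 million lines of code
    # is "worth" about $5 billion, then 177,229,680 lines scales
    # proportionally to the $7.5B-$7.7B ballpark, depending on rounding.
    asf_loc = 177229680
    lf_loc = 115000000
    lf_value = 5.0e9

    asf_value = lf_value * asf_loc / lf_loc
    print("%.2e" % asf_value)   # roughly 7.7e+09 with these round inputs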

Talk amongst yourselves...

(obviously, I didn't use Wheeler's COCOMO model, but how far off could the value be on such a large/varied dataset? I think it's also interesting that the ASF provides a space for all this to happen with a budget of only about $1 million a year)

Sunday, September 20, 2015

GPASM object files

As part of the work on my home automation system, I've been doing a lot of assembly programming for the PIC16F688. That is my chosen microcontroller for all the various embedded systems around the house.

One of the particular issues that I've run into is that I've divided the code into modules (like a good little boy). The gputils toolchain supports separate compilation, relocatable code, and linking. SWEET! But this is assembly code. I can't instantiate the I2C slave or master code for a particular pair of pins on the '688. There are tight timing loops, so the code must directly reference the correct port and pin (as opposed to variably-defined values).

One of my control boards talks to TWO I2C busses, and can operate as both slave and master on both busses. Since I must directly reference the port/pin, this means that I need separate compilations of the assembly code for each bus. And then I run into the problem: symbol conflict.

My solution is to rewrite symbols within the library modules for each bus instantiation. So the "start" function for the I2C master (I2C_M_start in the library's object file) is rewritten to HOUSE_I2C_M_start and LOCAL_I2C_M_start.

This works out really well, though I just ran into a problem where one library refers to another library. Not only do I need to rewrite the entrypoint symbols, but also the external reference symbols.

All of this rewriting is done with some Python code. The object files are COFF files, so I wrote a minimalist library to work with GPASM's object files (rather than generic COFF files). Using that library, I have a support script to add prefixes like HOUSE_ or LOCAL_.
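The actual scripts live in the repository and operate on the COFF symbol tables directly, but the renaming idea itself is simple. A conceptual sketch (the function and symbol names here are illustrative, not the real API):

    # Conceptual sketch of the symbol-prefixing step: given the symbols
    # defined (or referenced) by a library's object file, build a rename
    # map that adds an instance prefix such as HOUSE_ or LOCAL_.
    def build_rename_map(symbols, prefix):
        return {name: prefix + name
                for name in symbols if name.startswith('I2C_')}

    symbols = ['I2C_M_start', 'I2C_M_stop', 'I2C_S_init']
    print(build_rename_map(symbols, 'HOUSE_'))
    # {'I2C_M_start': 'HOUSE_I2C_M_start', 'I2C_M_stop': 'HOUSE_I2C_M_stop',
    #  'I2C_S_init': 'HOUSE_I2C_S_init'}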

Here are my support scripts:


If you're dealing with PIC object files, then maybe the above scripts will be helpful.

As an aside, I find it rather amusing to go back to assembly programming days, yet find myself still enmeshed within libraries, object files, and linkers.

Saturday, August 22, 2015

My Google Code projects have moved

Back in March, Google announced that the project hosting service on Google Code was shutting down. I wrote a post about why/how we started the service. ... But that closure time has arrived.

There are four projects on Google Code that I work on. Here is the disposition of each one:
serf
This has become Apache Serf, under the umbrella of the Apache Software Foundation. Justin and I started serf at Apache back in 2003. Two people are not sufficient for an Apache community, so we moved the project out of the ASF. We had a temporary location, but moved it to Google Code's project hosting at the service's launch, where it has resided for almost 10 years. The project now has a good community and is returning to its original home.
(link to: old project site)

pocore
This is a portability library that I started, as a tighter replacement for APR. Haven't worked on it lately, but will get back to it, as I believe it is an interesting and needed library. I've moved it to GitHub.

ezt
This is a very old, very simple yet capable, and mature templating library that I wrote for Python. It is used in many places due to its simplicity and speed. Also moved to GitHub.

gstein
This is my personal repository for random non-project work. I open source everything, even if it might not be packaged perfectly for use. Somebody might find utility in a block of code, so I keep it all open. The code in this repository isn't part of a team effort, so I'm not interested in the tooling over at GitHub. I just want an svn repository to keep history, and to keep it offsite. For this repository, I've chosen Wildbit's beanstalk, and the repository has been published/opened.
(link to: old project site)
I'm sad to see Google Code go away, and I don't consider the above movements ideal. But it's the best I've got right now. Flow with the times...

Saturday, March 14, 2015

Sigh. Google Code project hosting closing down

Google has just let us know that Google Code's project hosting will be shutting down.

On a story over on Ars Technica, there were a lot of misconceptions about why Google chose to provide project hosting. I posted a long comment there, but want to repeat that here for posterity:

As the Engineering Manager behind Google's project hosting's launch, I think some clarifications need to be made here.
In early 2005, SourceForge was not well-maintained, it was hard to use, and it was the only large hosting site available. Chris and I posed the following question: "what would happen if SourceForge went dark tomorrow?" … F/OSS apocalypse. SF would take tens of thousands of projects down with it. This wasn't too far-fetched, given the funding and team assigned to SourceForge.net at the time. Chris and I explored possibilities: provide operational support or machines from Google, or just offer to buy SourceForge outright. … Our evaluation was: we didn't need to acquire SourceForge. We just needed to provide an alternative. Provide the community with another basket for their eggs.
Three highly-talented engineers and I put together the project hosting from summer 2005 to its launch at OSCON in July 2006. We let SourceForge know in late 2005 what we were doing, and they added staff. We couldn't have been happier! … we never set out to kill them. Just to provide safety against a potentially catastrophic situation for the F/OSS community.
Did GitHub provide a better tool? I think so. But recall: that is their business. Google's interest was caretaking for the F/OSS community (much the same as the Google Summer of Code). The project hosting did that for TEN YEARS.
I'm biased, but call that a success.
There are many more hosting options today, compared to what the F/OSS ecosystem was dealing with in 2005 and 2006. I'm very sad to see it close down, but I can understand. Google contributes greatly to F/OSS, but what is the incremental value of their project hosting? Fantastic in 2006, but lower today.
… I hope the above helps to explain where/how Google Code's project hosting came about.

Thursday, January 15, 2015

Disappointing

I've been reading Ars Technica for years. The bulk of what they do: I find awesome.

A recent article used the phrase "Climate Denial" in its title. To me, in terms of the scientific method, there is no such thing as "denial", but simply "critical" or "questioning" or "not convinced". "Skeptical", if you will. All of these labels are fine, as they acknowledge that the hypothesis in question (AGW) is being tested. But "denial" has been used to shut down conversation, as if critical examination is no longer allowed.

So I posted my thoughts, in the forum attached to that article, basically repeating the above.

Ars Technica appears to have disliked my points about questioning, and that falsifiability is no longer applicable to AGW. So they closed my forum post, marking it as "trolling".

The ridiculous thing is that somebody even replied to my post, pointing out "scientific consensus" on Wikipedia, yet that article specifically discusses that certain theories can never be proven. Only disproven (ref: falsifiability, above). So when you find a hypothesis in this pattern... the approach is to disprove.

But nope. Ars Technica shut me down.

I will still read you, Ars. I like your content. But when you shut down discussion? And call it trolling, despite some kind of rational basis, and an attempt at civil discussion?

No. That is wrong, and I have lost respect for what you do.