Home Assistant DoS’ed my NAS

Posting this because I had a hard time finding it elsewhere, and wanted to boost the visibility.

Tried to log in to my Synology NAS tonight and got a message that said “You can’t, the disk is full.” Seemed unlikely to me, I had just been logged in earlier and had a terabyte available. I generate a lot of crap, but a terabyte in less than a day seemed suspicious.

I was able to get in via SSH and poked around. The /volume1 folder still had plenty of free space, so I started poking through the rest of the folders. Most were maybe a few MB… but /var was 900 MB. I dug down a few layers and found that /var/lib/synosmartblock accounted for most of it. A few places on the net said it was safe to delete, and that the problem would go away after a reboot. So I tried it, and it did.
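For the curious, the “which folder is eating my disk” hunt can be sketched in a few lines of Python (the function names are mine, not anything Synology ships; `du -sh /var/*` over SSH does the same job):

```python
import os

def dir_size(path):
    """Total size in bytes of all regular files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total

def largest_subdirs(path, top=10):
    """Immediate subdirectories of path, largest first."""
    subdirs = [os.path.join(path, d) for d in os.listdir(path)
               if os.path.isdir(os.path.join(path, d))]
    return sorted(subdirs, key=dir_size, reverse=True)[:top]
```

Pointed at /var, a bloated folder like synosmartblock should show up right at the top of the list.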

All good, right?

I reboot, log in through the web UI, and notice everything is slooow. CPU at 99%. Hm. Back to SSH, back to synosmartblock. It’s huge again. Something is clearly regenerating it. I take a look at the output of `ps -A`.

There’s what can only be described as a shit ton of processes running. And a lot of them are auth.cgi. My NAS is not exposed outside my home network, so I’m thinking it’s not hackers. (I hope not, at least.) A search for synology and auth.cgi turns up this link:

100% cpu, bunch of auth.cgi : synology – Reddit

Yeah, that sounds right. And the redditor noted it was Home Assistant having lost its mind and decided to ping the NAS for status. Damn near continuously. The NAS couldn’t keep up and filled its system folders. And, since Home Assistant was waiting for a response on each request, it too was inaccessible. Reboot, remove the Synology integration in Home Assistant, and we’re happy again.

★ Waze and Routing around traffic

When I first started using Waze, it seemed to be very good at saying “hey, there’s a traffic jam ahead, try this other route.” Over the past few months, though, it’s felt like that’s no longer the case. Traffic needs to have built up for a while before Waze will consider other options. No matter how many reports of heavy traffic there are, if it hasn’t been there for a while, you’re going to be headed right through it. I guess I just wish it were quicker to reroute.

★ Oh, Facebook

Ohhh, Facebook. You keep giving me reasons to think about giving you up.

This week, it’s this article from Ad Age:

> the social network will not be honoring the do-not-track setting on web browsers. A Facebook spokesman said that’s “because currently there is no industry consensus.”

So, let me get this straight… people set a switch in their browser that says “do not track me,” and Facebook decides that because there’s no agreement on what that means, they can ignore it? A better approach would have been to say: all these people have intentionally set this flag, and even though we can’t all agree on exactly what it means, we’ll honor the users’ intentions. But that’s not Facebook’s way. Time and again, they have proven that they’re not interested in what users want (or in what users actively try to stop them from doing); they’re going to do whatever suits Facebook best and screw the users.

It looks like IE10 and Firefox are set to “do not track” by default, which could be used to counter my argument above that people are setting it deliberately. Facebook could be the bigger entity here and go along with it anyway, or they could ignore it for just those two browsers.

The best part, though: when I search Google for “Facebook do not track,” the fourth hit is a note from Facebook from almost three years ago. From that article:

> If you want to be anonymous online, three of the four major Internet browsers now let you send a “Do Not Track” signal. We respect that signal and won’t track your surfing, but too many companies don’t respect “Do Not Track,” including the largest online — Google.

> Google’s Chrome is the only major browser not to include an adequate “Do Not Track” setting and Google’s web sites don’t respect your “Do Not Track” signals.

Google’s doing the same thing, but at least they didn’t flip-flop on the issue to whatever’s most convenient for them. 

Guess it’s time for another privacy lockdown on Facebook. 

★ Manilla Closing – or Let Me Pay For Your Service

[Manilla is closing down as of July 1](https://www.manilla.com/announcement/). It was a good service, but I’m not entirely surprised. I tried it for a while before switching to [FileThis](https://filethis.com/). I think FileThis is better for two reasons.

One is that it automagically downloads everything to Dropbox. While Dropbox could (theoretically) go away at any time, I’ll still have local copies of my data.

The other thing FileThis has going for it is that it’s not free. Ultimately, servers and bandwidth and employees cost money. I feel much safer relying on a service I can pay for, knowing I’m supporting my own usage. IFTTT, I’m looking at you.

★ IFTTT iOS Photo recipes will upload ALL your photos

IFTTT on iOS used 7.5GB of cellular data – iphone usage | Ask MetaFilter:

An IFTTT user found that the IFTTT app had used 7.5GB of cellular data… despite his only having two recipes (a weather one and a photo screenshot one). When he inquired, since there weren’t any corresponding huge chunks of data he could see, IFTTT’s Twitter account confirmed that the Photo channel uploads all of your photos.

So now I’m wondering: are there other IFTTT channels that will beat the heck out of my cellular data plan?

I love IFTTT, and I hope they correct this quickly because right now this just reeks of being a stupid decision on their part. 

Update: As I was typing, I looked – both the Android and iOS Photo channels now say that they upload all your photos. Not sure if they said that before, though.

★ Nomorobo Stops Annoying Robocalls and Telemarketers, Once and for All

>Nomorobo stops those unwanted calls in their tracks, so you don’t have to put up with them any longer.

Interesting approach to solving robocalls, if your phone provider supports simultaneous ring. Vonage apparently does, so I guess I’m going to give it a try.

Via [Lifehacker](http://lifehacker.com/nomorobo-stops-annoying-robocalls-and-telemarketers-on-1573558778)

★ Quick Tracking for Quantified Self

This one is nerdy, even for me. Consider yourself warned.

I wanted to start tracking some data points. I was using Fitbit for step and weight tracking, MyFitnessPal to track calories, and Reporter for mood and environment. But there were other things. What time does my daughter’s bus arrive in the morning? How often did I use my rescue inhaler? Neither needed any input beyond “event happened.” I’m slowly consolidating my data into a Google Docs spreadsheet, so I decided these two could go in there as well. The spreadsheet has columns for time, type of record (event, weight, location, etc.), a description, and latitude and longitude.

Bus Arrivals

First up was the bus arrival. While my daughter gets on the bus, I tap an action in Launch Center Pro. That action hits the following URL:

`drafts://x-callback-url/create?text=event%20%7C%7C%7C%20bus%20arrival&action=ifttt&x-success=launchpro://`

Basically, it tells Drafts to create a draft of “event ||| bus arrival” and call the ifttt action in Drafts, then hand control back to Launch Center Pro. 
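If you’re building these URLs by hand, the percent-encoding is the fiddly part. A small sketch that reproduces the URL above (the helper name is mine, not a Drafts or Launch Center Pro API):

```python
from urllib.parse import quote

def drafts_create_url(text, action, x_success="launchpro://"):
    """Build a Drafts x-callback-url that creates a draft, runs an
    action on it, then hands control back to the calling app."""
    return ("drafts://x-callback-url/create"
            "?text=" + quote(text, safe="") +      # spaces -> %20, | -> %7C
            "&action=" + quote(action, safe="") +
            "&x-success=" + x_success)

url = drafts_create_url("event ||| bus arrival", "ifttt")
```

Feeding it “event ||| bus arrival” and the ifttt action yields exactly the URL the Launch Center Pro action fires.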

The ifttt action in Drafts looks like this:

`drafts://x-callback-url/create?text={{!ifttt}}%0A[[draft]]&action={{List in Reminders}}&afterSuccess=Delete`

This takes the text of the draft and dumps it into Reminders in the ifttt list. Once it’s done, it deletes the draft, so things stay tidy. 

Finally, I have an IFTTT recipe that takes items in the ifttt list in Reminders, and adds them to my lifelog Google spreadsheet, prepended by the creation date. That recipe can be found at https://ifttt.com/recipes/160434-lifelog-text-via-reminders

Rescue Inhaler

For this, I thought I’d add a location aspect, because while the bus always stops in nearly the same place, I might need my inhaler anywhere. If I see a pattern, maybe I should avoid those places. So, I inserted Pythonista into the equation to get the location data. First up, the Launch Center Pro action:

`pythonista://ifttt?action=run&args=rescue_inhaler`

This calls the Pythonista script ifttt.py with the argument “rescue_inhaler”. I won’t pretend it’s a good script, but it works. I borrowed liberally, and if someone recognizes their code here, please let me know so I can give credit. Any issues in it are my doing.

```python
import sys
import time
import urllib
import webbrowser

import location  # Pythonista's built-in location module


def get_location_record(event_name):
    """Return an 'event|||name|||lat|||lon' record for the current location."""
    location.start_updates()
    time.sleep(3)  # give the GPS a few seconds to get a fix
    current = location.get_location()
    location.stop_updates()
    return ('event|||' + event_name + '|||' +
            repr(current['latitude']) + '|||' +
            repr(current['longitude']))


# Build the Drafts x-callback-url and hand the record off to Drafts,
# which runs its ifttt action and then returns to Launch Center Pro.
record = get_location_record(sys.argv[1])
draft_url = ('drafts://x-callback-url/create?text=' +
             urllib.quote_plus(record) +
             '&action=ifttt&x-success=launchpro://')
webbrowser.open(draft_url)
```

It takes the incoming argument (“rescue_inhaler”), gets the latitude and longitude where the phone is now, and mashes it all together with a triple pipe separating the fields. (The triple pipe is how IFTTT knows where to separate the columns on text headed to Google Spreadsheets.) It then calls the ifttt Drafts action noted above, and the record goes out as described before.
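To make the delimiter concrete, here’s a toy sketch (the coordinates are made up) of how one of these records splits back into spreadsheet columns:

```python
# A row as it leaves the phone:
row = "event|||rescue_inhaler|||40.123|||-75.456"

# IFTTT splits on the triple pipe, one column per field
record_type, description, lat, lon = row.split("|||")
```

Three pipes are unlikely to show up in a description by accident, which is presumably why IFTTT picked them as the separator.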

★ Scanned Automation with Hazel, Revisited

You may recall I was scanning documents and using AppleScript to extract a highlighted date from them. Since then, Hazel has started natively supporting date matching in its rules, which has been great. Tell Hazel the date format, and maybe some text just before or after it so it can tell a start date from an end date, and it gets it right. Most of the time.

Sometimes, though, things go sideways. I can see the text in the document, but for whatever reason, Hazel isn’t matching. I ended up creating a service via Automator which extracts the text from the PDF and puts it up on the screen. Most times, I can then find the text I want and put it into the Hazel rule, and away we go. 
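My Automator service isn’t reproduced here, but the same extraction can be sketched with Poppler’s `pdftotext` command-line tool (assuming it’s installed, e.g. via Homebrew; the wrapper functions are mine):

```python
import subprocess

def pdftotext_cmd(path):
    # the trailing "-" tells pdftotext to write the extracted text
    # to stdout instead of creating a .txt file next to the PDF
    return ["pdftotext", path, "-"]

def extract_pdf_text(path):
    """Return the text layer of a scanned-and-OCRed PDF."""
    return subprocess.run(pdftotext_cmd(path), capture_output=True,
                          text=True, check=True).stdout
```

Dumping the text this way makes it easy to see exactly what string Hazel should be matching on, even when the OCR output differs from what the page looks like.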

This project is why I hadn’t posted much the past few weeks – I’ve been running a backlog of scanned PDFs through a bunch of rules to make sure I’ve got enough of it covered. Right now, I’ve got about 40 rules to identify the company that sent me the bill, and slightly fewer than that to identify the dates. 450 items have gone through the process, so I’m fairly confident it’s working. A few more tweaks (tagging things like “bills that aren’t automatically paid online” and watching for Social Security numbers) and I should be in good shape.