Thoughts and Deeds

Unbowed Unbent Unbroken

Sending New Subreddit Posts to a Slack Channel

This is my first time using PRAW, so while reading up on how to use it I came across Build a reddit bot, which was pretty much what I wanted. I used most of that code for the Reddit side of things.

You can find the source for this on Github: Reddit Slack Bot

I recently became the moderator of the Learn Ruby on Rails subreddit. I want to build out the subreddit and add relevant information on setting up a dev environment, Ruby and Rails tutorials, and foster a community of learning.

To that end I created a free Slack team for the subreddit so teachers and learners could come together. Invite yourself to our Slack team!

(I wrote this script in Python and yes, I get the irony of using Python for a Ruby-based subreddit.) :)

I thought it would be a good idea to have posts made to the subreddit show up in the Slack channel. I’ve seen the equivalent for subreddits that have IRC channels and I didn’t think it would be very hard to accomplish. And it really wasn’t. Less than 50 lines of code!

There are two libraries we’re going to need to install via pip.

pip install praw
pip install slackclient

We’re also going to need the os library but that’s a core part of Python, so nothing to install there.

After we’ve installed those via pip we can start our script.

Import them into our script

import praw
import os
from slackclient import SlackClient

Connect to Slack

For this we’ll need a token. You can create an API token for your team in your Slack settings.

token = "SLACK_API_TOKEN"  # replace with your team's API token
sc = SlackClient(token)

Create a user agent for the Reddit API

Reddit requires you to give the name of a user agent to connect with. Reddit’s API guidelines ask for something unique and descriptive, so I gave mine a fairly descriptive name.

user_agent = "LearnRoR-Slack Bot 0.1"

Create a connection to Reddit

r = praw.Reddit(user_agent=user_agent)

Storing submission IDs in a text file for now.

That should probably eventually be stored in a database, probably SQLite.

First, check whether the file exists. If it doesn’t, create an empty list to store the submission IDs:

if not os.path.isfile("posts_replied_to.txt"):
  posts_replied_to = []

If the file does exist, read it in, split on the newline characters, and filter out empty strings, since it’s possible to have a blank file.

else:
  with open("posts_replied_to.txt", "r") as f:
      posts_replied_to = f.read()
      posts_replied_to = posts_replied_to.split("\n")
      # wrap in list() so we can append to it later
      # (in Python 3, filter() returns an iterator)
      posts_replied_to = list(filter(None, posts_replied_to))
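The file handling above can be pulled into a pair of small helpers (hypothetical names, not from the original script) so the logic is easy to test on its own:

```python
import os

def load_seen_ids(path):
    # Return the saved submission IDs, or an empty list if the file
    # doesn't exist yet; blank lines are filtered out.
    if not os.path.isfile(path):
        return []
    with open(path, "r") as f:
        return list(filter(None, f.read().split("\n")))

def save_seen_ids(path, ids):
    # One submission ID per line.
    with open(path, "w") as f:
        for post_id in ids:
            f.write(post_id + "\n")
```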

PRAW call to connect to our subreddit of choice!

subreddit = r.get_subreddit("ENTER_SUBREDDIT_NAME")

Loop through the new submissions. We’re only grabbing 5 each time

for submission in subreddit.get_new(limit=5):
  # Does our submission already exist?
  if submission.id not in posts_replied_to:
      # If not, let's make our call to Slack to post the submission and store
      # the submission.id in our array of posts we've replied to
      sc.api_call("chat.postMessage", username="new post bot", channel="#ENTER_SLACK_CHANNEL", text=submission.permalink, unfurl_links="true")
      posts_replied_to.append(submission.id)

Open our text file and write out the new submission IDs so we don’t post them again.

with open("posts_replied_to.txt", "w") as f:
  for post_id in posts_replied_to:
      f.write(post_id + "\n")

Miscellaneous

Reddit also asks that you only hit their API once every 30 seconds. To monitor new submissions I set a cron job to run this script every minute. I tested this by letting the newest 5 submissions get posted to Slack, then creating a test post to make sure it was posted. When it showed up in our Slack channel I knew we were set!
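For reference, a crontab entry that runs the script every minute could look like this (illustrative paths; adjust the interpreter and script locations for your machine):

```
* * * * * /usr/bin/python /path/to/reddit_slack_bot.py
```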

Source

Want to Get Email Updates From a Store That Doesn't Provide Them?

I’ve been curious about learning leathercrafting for a while now but I haven’t known where to start. I was suggested to check out a Society for Creative Anachronism but I couldn’t find anything local.

I found the very helpful Leathercraft subreddit and through them I discovered Tandy Leather.

The cool thing about Tandy Leather is they provide classes on leatherworking! Some are free and some cost money but they are priced reasonably. A listing of their stores and the upcoming classes are found at http://www.tandyleather.com/en/leathercraft-classes.

Tandy does have an email list, but it’s not store specific, so I won’t get the upcoming courses for the store closest to me (Richmond, VA). There is a way to solve this though!

We’re going to scrape the page, grab the information for the Richmond store, email it to ourselves, and keep the script on an internet-connected server so we can set a cron job to run this weekly.

We’ll review how to do this in Python and Ruby.

Python

The script for this is very short and easy. For scraping we’re going to use BeautifulSoup. And to email the results to ourselves we’re going to use Mandrill.

We’re going to use the great requests library to make the connection to the page then put the response into a BeautifulSoup object.

At a command line (preferably in a virtualenv) let’s install what we need for now.

pip3 install beautifulsoup4
pip3 install requests
pip3 install html5lib

Then in our script let’s add:

import requests
import html5lib
from bs4 import BeautifulSoup

# fetch the page so we have a response to parse
r = requests.get("http://www.tandyleather.com/en/leathercraft-classes")

# using the html5lib parser instead of the typical
# html.parser
soup = BeautifulSoup(r.text, 'html5lib')

Next we want to find the Richmond information on the page. The html for that looks like:

<a name="id_631"></a>

<hr>

<div class="store-class">
          <div>
      <span><strong>Richmond, VA #138 Leathercraft Classes</strong></span>
  </div>

  <div class="store-class-content"><p>
November 28, 2015 9:00A.M. - 4:00P.M.</p>
<p>
Black Friday Sale! No class. Look for our E-mail Blast, give us a call, or stop in the store early to check out our pre black Friday specials.</p>
<p>
&nbsp;</p>
<p>
December 5, 2015 10:00 A.M. - 1:00 P.M.</p>
<p>
Shadow Box: Come join us as we will be making a creative display case for small knickknacks, keepsakes, or leave them empty for use as wall decorations. We&rsquo;ll be cutting, forming, and adding your own personal flair to these elegant boxes.</p>
<p>
Cost: Retail $25, Gold/Elite $20</p>
<p>
&nbsp;</p>
<p>
December 12, 2015 10:00 A.M. &ndash; 1:00 P.M.</p>
<p>
Basic Carving: Have you recently starting or want to begin carving/tooling leather? Learn what the Basic 7 tools are and how to use them. We&rsquo;ll be carving a Sheridan style flower, which will utilize all seven tools.</p>
<p>
Cost: Free</p>
<p>
&nbsp;</p>
<p>
December 19, 2015 10:00 A.M. &ndash; 12:00 P.M.</p>
<p>
Valet Tray: need somewhere to toss your keys or change at the end of a long day? A leather Valet tray is the perfect solution. This classy catchall for the bedroom night stand or hallway/living room table. Personalize to make it your own. We&rsquo;ll be using snaps on it so it&rsquo;ll lay flat when not in use.</p>
<p>
Cost: Retail $25, Gold/Elite $20</p>
<p>
&nbsp;</p>
<p>
On December 18 &amp; 19, join us for our first Super Saturday Sale. Everything in the store will be on sale. You really don&rsquo;t want to miss this.</p>
</div>

  <div>
      Please contact store for class details.<br>
      Richmond138@tandyleather.com<br>
      Phone: 804-750-9970<br>
      Toll Free: 866-755-7090<br>
      9045 W. Broad St, #130<br>
      Henrico, VA 23294<br>
  </div>
</div>

Looking at the HTML, the only identifier for the Richmond block is <a name="id_631"></a>, which sits outside the div with class store-class, and there is no unique identifier for Richmond inside div.store-class. So what we have to do is find the a with name="id_631", then grab the next div whose class is store-class.

We can do that with one line of code courtesy of BeautifulSoup.

rva = soup.find("a", attrs={"name": "id_631"}).find_next_sibling("div", class_="store-class")

That line finds the a whose name is id_631 then finds the next sibling that is a div whose class is store-class.

And that will give us the entire div with all of the information we want!

But there is a caveat: the result is a BeautifulSoup Tag object, so we’re going to need to convert it to a string.

rva = str(rva)

That’s it!

We’ve got the information now we just need to get it to us.

For that I’m using Mandrill. Mandrill is a transactional email service made by MailChimp. Just like MailChimp, there is a free tier that will be more than enough for us.

Go to Mandrill, sign up, and get a developer key.

At a command line let’s install the mandrill library.

pip3 install mandrill

Now to make our connection:

mandrill_client = mandrill.Mandrill("ENTER_API_KEY")

And let’s build out our message.

message = {
  'html': rva,
  'subject': 'Upcoming classes at Tandy Leather',
  'from_email': 'ENTER_FROM_EMAIL_ADDRESS',
  'to': [{'email': 'ENTER_TO_EMAIL_ADDRESS'}]
}

What we’re doing here is setting the HTML body of the message to the information we scraped from the web page. We give our email a subject, set the from address, and set the to address.
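As a sketch, that construction could be wrapped in a small helper (a hypothetical function of my own naming, not part of the Mandrill API) so the recipient-list shape is handled in one place:

```python
def build_message(html, subject, from_email, to_email):
    # Mandrill's messages.send expects 'to' as a list of
    # recipient dicts, one per address.
    return {
        'html': html,
        'subject': subject,
        'from_email': from_email,
        'to': [{'email': to_email}],
    }
```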

Then we make a call to messages.send:

result = mandrill_client.messages.send(message=message)

The call returns a result, so if you don’t get the email you can print result and see what error you received.

Our final script looks like this:

#!/Users/jwhite/python_projects/tandyleather/env-tandyleather/bin/python

import requests
import html5lib
import mandrill
from bs4 import BeautifulSoup

mandrill_client = mandrill.Mandrill("ENTER_API_KEY")

r = requests.get("http://www.tandyleather.com/en/leathercraft-classes")

soup = BeautifulSoup(r.text, 'html5lib')

rva = soup.find("a", attrs={"name": "id_631"}).find_next_sibling("div", class_="store-class")

rva = str(rva)

message = {
  'html': rva,
  'subject': 'Upcoming classes at Tandy Leather',
  'from_email': 'ENTER_FROM_EMAIL_ADDRESS',
  'to': [{'email': 'ENTER_TO_EMAIL_ADDRESS'}]
}

result = mandrill_client.messages.send(message=message)

Notice the shebang line pointing at where our Python interpreter is located. This is required on our server so the cron job can run the script and email us the information automatically.

A cron tutorial is a little outside the scope of this post.

I did promise to show how to do this in Ruby too, didn’t I?

Ruby

The Ruby script is VERY similar to the Python version.

We’re going to need to install the following gems:

gem install mandrill-api
gem install httparty
gem install nokogiri

We then require our libraries:

require 'mandrill'
require 'nokogiri'
require 'httparty'

We use httparty to get the page:

response = HTTParty.get('http://www.tandyleather.com/en/leathercraft-classes')

Put that into a nokogiri object:

doc = Nokogiri::HTML(response.body)

Create our connection to Mandrill

mandrill = Mandrill::API.new 'API_KEY'

We find the section we’re looking for, create a new empty string to hold the information, but then there’s an extra step with nokogiri.

id_631 = doc.css("a").select{|link| link['name'] == 'id_631'}

rva = ''

id_631.each do |item|
  hr = item.next_element
  rva = hr.next_element.inner_html
end

We have to loop over our id_631 result to get the next element, but that’s an <hr>, so we have to get the next element after that, which is the div.store-class.

We build the message the same way as before but using Ruby hashrocket syntax.

message = {
  "html" => rva,
  "subject" => "Upcoming classes at Tandy Leather",
  "from_name" => "Jody White",
  "to" => [{"email" => TO_EMAIL}],
  "from_email" => FROM_EMAIL
}

Then make the call to mandrill to send.

result = mandrill.messages.send message

The entire script is:

#!/Users/jwhite/.rbenv/shims/ruby

require 'mandrill'
require 'nokogiri'
require 'httparty'

response = HTTParty.get('http://www.tandyleather.com/en/leathercraft-classes')

doc = Nokogiri::HTML(response.body)

mandrill = Mandrill::API.new 'API_KEY'

id_631 = doc.css("a").select{|link| link['name'] == 'id_631'}

rva = ''

id_631.each do |item|
  hr = item.next_element
  rva = hr.next_element.inner_html
end

message = {
  "html" => rva,
  "subject" => "Upcoming classes at Tandy Leather",
  "from_name" => "Jody White",
  "to" => [{"email" => TO_EMAIL}],
  "from_email" => FROM_EMAIL
}

result = mandrill.messages.send message

And that’s it!

Summary

This was a quick and fun project to put together. And useful! I set my cron job to run every Sunday morning at 8am. I tested out my cron job and it was working when I set it a minute into the future, but the real test will be tomorrow morning!

Horrible Best Buy Experience

Here’s what happened:

On Oct 15th I ordered a dishwasher from the store in Charlottesville, VA. I was told it would be delivered on Oct 26. It was delivered, but it was defective and didn’t work.

I went back to the store and did an exchange. I was told the new dishwasher would be delivered on Nov 2. The install team called me the night before and scheduled a time. But on the delivery day no one showed or called. I called the delivery team and they said someone from the store was supposed to call me to tell me the store didn’t actually have the dishwasher.

I went back to the store wanting to do a return but one of the managers upgraded me to the next best model free of charge and showed me they had 25 dishwashers in stock at a warehouse and promised me I would have it in a week.

So we scheduled a delivery for Nov 9. The night before the install, the team called me and scheduled a time. The install team called me the next morning and told me the store once again didn’t have the dishwasher.

I went to the store and one of the low-level managers immediately did a refund and had the install team come get the defective dishwasher. He did this as I walked in the door; I didn’t have to say anything. I hadn’t talked to him before, so I’m guessing the previous manager I spoke with told him the story, but then, for whatever reason, that previous manager wouldn’t come talk to me.

On Thanksgiving Day I was reviewing my checking account for the month of November and saw Best Buy never refunded my money from the dishwasher return.

I went to the store. I knew they were going to be busy, but I got there before the store opened so they wouldn’t be dealing with a rush of customers. I talked to an employee outside, described who had done the return, and he went to get that guy. The employee came back alone because the manager I last talked to was now refusing to talk to me.

They said they were too busy to talk to me. I understand that. But they owe me $500 because of a mistake they made. I asked for the store manager’s or district manager’s name and the employee refused to give those to me. He wasn’t wearing a name tag and he wouldn’t give me his name.

Then he started telling me that this is my fault for not realizing they didn’t do the refund sooner and if I had waited this long I could wait until next week. And that I must not need the money if I hadn’t noticed before. He said if this was really important I would have noticed sooner and since I didn’t see they never refunded the money it must not really be that important. Some of the stuff he told me didn’t make sense. He told me banks were closed so they couldn’t do the refund, but banks are closed on Saturday and Sunday every week and Best Buy processes refunds on those days all the time.

How that employee talked to me is what really upsets me, frustrates me, and makes me angry. Best Buy has messed up this transaction from the very beginning yet this employee makes it sound like THEM not refunding the money is trivial and perhaps even my fault. I was livid. I didn’t expect to get much traction on Thanksgiving but I thought I would try, but for them to turn around and make this my fault is just ridiculous.

Checking Inventory at Your Local Barnes and Noble

Does anyone really shop or buy at local brick & mortar stores anymore? No? Well, this was still a fun app to make.

A few days ago someone in a forum I frequent posted that they keep a list of bookmarks to bn.com that told them stock levels for their local BN but something had changed and the link was no longer working.

What he had to do now was go to the specific item page, click a button that would open up a search by zip box, enter the zip, then get a modal telling him if it was in stock. He wanted to know if there was a way to link directly to the pop-up.

I did a search for The Martian by Andy Weir, poked around the item page and the modal using Chrome dev tools and didn’t see a way to link directly to the modal but I did find something interesting!

While the page only told you if the item was In Stock or not, the response provided quite a bit of other information, including the number of copies the store has on hand!


BN requires custom headers to get to the response over HTTP, but if you right-click on the name of the request in the Network tab of dev tools you can choose Copy as cURL. Paste that into a terminal window and you get the JSON response back for that item!

curl 'http://stores.barnesandnoble.com/ris/inventory?ean=0670541597682&page=1&pageSize=5&zipcode=22911' -H 'ris-zip: 22911' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Accept-Language: en-US,en;q=0.8' -H 'ris-chash: e0a986c0f4f2defbe1218df7f050dcb1' -H 'ris-appsig: bc9a6da9d84f82a1048b52e058a0eff7' -H 'Cookie: TS01e75984_76=08ea36dc92ab2800efd856ac2f7d30201365d091bb47de8b2d3be5964994f0f6c5a50db942b6f40f434d0d2c7c77c48608f8b9c31a87f800ef62a9c7eac1075345cef3282d782d18aa54589386a7c3608f3fc60cecf03f5ffe4bc3487d55554999902bcb466ef9f5f485e875c1e8d44835339daf27f4ab6c7690714708a7c4874897b81ce3feaeb51db92d10aac5c26bc285fee97bcc39f78f695c07e88e24d1e51edb2e7c55aecf101d23559ac2985478ad9cf6c98df8c558d2991018acdf217493b70dd8f7f8879242cbc60b256687e1bb2b6a3e0400023026e1f14e3971213eff1d8f6bbbac0c0020bcae0d515a02096c1850538bf79510ac65bea231ee0545079a706bdbb051e519afb8701270265d72a2543cb49ba2060e39eb18b3789765d95eed4f288a4169202b31ff1a8fd6; JSESSIONID=Jqkh0Ej4GrQyRU0l22mi7Gmv; TS01cdc16d=01b22697604b1912c5bdc420b66e39c850afb865bd6dc3ef4749d23d80be419f7ed6d20da0450efdf480fa9cb6b55d2ed776258918; DeviceType=Desktop; showSiteAs=Desktop; client-profile=Desktop; TS01af0a8e=01b2269760af1d36cb4fb5b7ca692976712a5a4967599fe0d92b4d461161ec5546daabbaf985f7aee29c78c510273b9641b2400b03ba398c602fa8f53a55c470ada9317dc274f5309eb70a67d500b94d8bc3967ea9; AMCV_9A223704532965F10A490D44%40AdobeOrg=1256414278%7CMCMID%7C87343886875634417124019563558639690053%7CMCAAMLH-1436405345%7C7%7CMCAAMB-1436405345%7CNRX38WO0n5BH8Th-nqAG_A%7CMCAID%7CNONE; BIGipServerwby-fe-cseap-pool_8080=!r4eYi3prBznUjIadz57UnRO+XgUP9GBXYJfEiZiVWLXTTCnzTWp5rZ04wSAzFGUwaHq6wRb8mxEIvSY=; BIGipServerwby-fe-caiws-pool_80=!Z9RlgWru6D6gob6dz57UnRO+XgUP9L0vERSL4mYexaoUiHw4Fi8ZummU6q36Ngq8oou4t0cx+H0TbBU=; __gads=ID=afd12464bf4e9ede:T=1435934093:S=ALNI_MY-qv9PAulhhBNeDsi8gI_pQrJPUg; __qca=P0-1566046451-1435934094100; s_vnum=1438401600945%26vn%3D7; 
ADRUM=s=1435948949240&r=http%3A%2F%2Fwww.barnesandnoble.com%2Fs%2Fdead%2Bof%2Bwinter%3F-119814330; _ga=GA1.2.396886361.1435800545; s_cc=true; RT=; s_invisit=true; s_depth=10; s_lv=1435949667207; s_lv_s=Less%20than%201%20day; TS01e75984_27=014a7c9820ca1a3947ea741b05132d8a885f8cea36ffe609379a58536d96879aaae5d69311f62543b4d861c7f03c05bd3c4541cf9d; TS01e75984=01b22697609a9717ceff3263c877f56b460b919f8036a719ac8691a5803d05c8f6122d12c52b2c5bebf0cdc90c0ef5a3452c8b5e7889135d6accf7433d07dc5d794d82006e' -H 'ris-email: ' -H 'Connection: keep-alive' -H 'ris-appts: 1435949667246' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.130 Safari/537.36' -H 'ris-price: 53.95' -H 'Accept: application/json, text/plain, */*' -H 'Referer: http://stores.barnesandnoble.com/ris/reservation.jsp?zipcode=22911&price=53.95&email=&chash=e0a986c0f4f2defbe1218df7f050dcb1&appid=bndotcom&appsig=bc9a6da9d84f82a1048b52e058a0eff7&ean=0670541597682&appts=1435949667246&' -H 'ris-ean: 0670541597682' -H 'ris-appid: bndotcom' --compressed

That’s great and all but how are we going to put this into a script to make it usable?

Head to http://curl.trillworks.com/ to convert cURL to Python requests. Paste the cURL command in, click the button, and copy your Python code!

You’ll get something that looks like this:

cookies = {
    'TS01e75984_76': '08ea36dc92ab2800efd856ac2f7d30201365d091bb47de8b2d3be5964994f0f6c5a50db942b6f40f434d0d2c7c77c48608f8b9c31a87f800ef62a9c7eac1075345cef3282d782d18aa54589386a7c3608f3fc60cecf03f5ffe4bc3487d55554999902bcb466ef9f5f485e875c1e8d44835339daf27f4ab6c7690714708a7c4874897b81ce3feaeb51db92d10aac5c26bc285fee97bcc39f78f695c07e88e24d1e51edb2e7c55aecf101d23559ac2985478ad9cf6c98df8c558d2991018acdf217493b70dd8f7f8879242cbc60b256687e1bb2b6a3e0400023026e1f14e3971213eff1d8f6bbbac0c0020bcae0d515a02096c1850538bf79510ac65bea231ee0545079a706bdbb051e519afb8701270265d72a2543cb49ba2060e39eb18b3789765d95eed4f288a4169202b31ff1a8fd6',
    'JSESSIONID': 'Jqkh0Ej4GrQyRU0l22mi7Gmv',
    'TS01cdc16d': '01b22697604b1912c5bdc420b66e39c850afb865bd6dc3ef4749d23d80be419f7ed6d20da0450efdf480fa9cb6b55d2ed776258918',
    'DeviceType': 'Desktop',
    'showSiteAs': 'Desktop',
    'client-profile': 'Desktop',
    'TS01af0a8e': '01b2269760af1d36cb4fb5b7ca692976712a5a4967599fe0d92b4d461161ec5546daabbaf985f7aee29c78c510273b9641b2400b03ba398c602fa8f53a55c470ada9317dc274f5309eb70a67d500b94d8bc3967ea9',
    'AMCV_9A223704532965F10A490D44%40AdobeOrg': '1256414278%7CMCMID%7C87343886875634417124019563558639690053%7CMCAAMLH-1436405345%7C7%7CMCAAMB-1436405345%7CNRX38WO0n5BH8Th-nqAG_A%7CMCAID%7CNONE',
    'BIGipServerwby-fe-cseap-pool_8080': '!r4eYi3prBznUjIadz57UnRO+XgUP9GBXYJfEiZiVWLXTTCnzTWp5rZ04wSAzFGUwaHq6wRb8mxEIvSY=',
    'BIGipServerwby-fe-caiws-pool_80': '!Z9RlgWru6D6gob6dz57UnRO+XgUP9L0vERSL4mYexaoUiHw4Fi8ZummU6q36Ngq8oou4t0cx+H0TbBU=',
    '__gads': 'ID=afd12464bf4e9ede:T=1435934093:S=ALNI_MY-qv9PAulhhBNeDsi8gI_pQrJPUg',
    '__qca': 'P0-1566046451-1435934094100',
    's_vnum': '1438401600945%26vn%3D7',
    'ADRUM': 's=1435948949240&r=http%3A%2F%2Fwww.barnesandnoble.com%2Fs%2Fdead%2Bof%2Bwinter%3F-119814330',
    '_ga': 'GA1.2.396886361.1435800545',
    's_cc': 'true',
    'RT': '',
    's_invisit': 'true',
    's_depth': '10',
    's_lv': '1435949667207',
    's_lv_s': 'Less%20than%201%20day',
    'TS01e75984_27': '014a7c9820ca1a3947ea741b05132d8a885f8cea36ffe609379a58536d96879aaae5d69311f62543b4d861c7f03c05bd3c4541cf9d',
    'TS01e75984': '01b22697609a9717ceff3263c877f56b460b919f8036a719ac8691a5803d05c8f6122d12c52b2c5bebf0cdc90c0ef5a3452c8b5e7889135d6accf7433d07dc5d794d82006e',
}

headers = {
    'ris-zip': '22911',
    'Accept-Encoding': 'gzip, deflate, sdch',
    'Accept-Language': 'en-US,en;q=0.8',
    'ris-chash': 'e0a986c0f4f2defbe1218df7f050dcb1',
    'ris-appsig': 'bc9a6da9d84f82a1048b52e058a0eff7',
    'ris-email': '',
    'Connection': 'keep-alive',
    'ris-appts': '1435949667246',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.130 Safari/537.36',
    'ris-price': '53.95',
    'Accept': 'application/json, text/plain, */*',
    'Referer': 'http://stores.barnesandnoble.com/ris/reservation.jsp?zipcode=22911&price=53.95&email=&chash=e0a986c0f4f2defbe1218df7f050dcb1&appid=bndotcom&appsig=bc9a6da9d84f82a1048b52e058a0eff7&ean=0670541597682&appts=1435949667246&',
    'ris-ean': '0670541597682',
    'ris-appid': 'bndotcom',
}

requests.get('http://stores.barnesandnoble.com/ris/inventory?ean=0670541597682&page=1&pageSize=5&zipcode=22911', headers=headers, cookies=cookies)

Let’s import requests (be sure to install requests if you don’t already have it!) and add the Python code we just got. It turns out we don’t need the cookies dictionary, so we’re going to remove it, and at the end let’s put the return value into a variable.

import requests

headers = {
    'ris-zip': '22911',
    'Accept-Encoding': 'gzip, deflate, sdch',
    'Accept-Language': 'en-US,en;q=0.8',
    'ris-chash': 'e0a986c0f4f2defbe1218df7f050dcb1',
    'ris-appsig': 'bc9a6da9d84f82a1048b52e058a0eff7',
    'ris-email': '',
    'Connection': 'keep-alive',
    'ris-appts': '1435949667246',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.130 Safari/537.36',
    'ris-price': '53.95',
    'Accept': 'application/json, text/plain, */*',
    'Referer': 'http://stores.barnesandnoble.com/ris/reservation.jsp?zipcode=22911&price=53.95&email=&chash=e0a986c0f4f2defbe1218df7f050dcb1&appid=bndotcom&appsig=bc9a6da9d84f82a1048b52e058a0eff7&ean=0670541597682&appts=1435949667246&',
    'ris-ean': '0670541597682',
    'ris-appid': 'bndotcom',
}

r = requests.get('http://stores.barnesandnoble.com/ris/inventory?ean=0670541597682&page=1&pageSize=5&zipcode=22911', headers=headers)

Run this at the terminal and you should get the same return response we saw in dev tools.

Want to see that in a better format? At the top add from pprint import pprint.

Then we’re going to call the .json() method on the return to get a Python dictionary we can work with.
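If you want to see what that step does without re-running the request, the stdlib json module performs the same parsing (the payload below is a trimmed, made-up stand-in for the real response):

```python
import json

# Trimmed, made-up stand-in for the response body seen in dev tools.
body = '{"statusCode": 200, "content": {"stores": [{"inventory": 41}]}}'

data = json.loads(body)  # essentially what r.json() does for you
print(data['statusCode'])
print(data['content']['stores'][0]['inventory'])
```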

r = r.json()
pprint(r)
{'content': {'distance': 10,
             'ean': '9780553418026',
             'firstPage': True,
             'inventoryLoadDate': '2015-07-03 04:00:00',
             'inventoryReservable': 1,
             'lastPage': True,
             'page': 1,
             'pageCount': 1,
             'pageSize': 5,
             'product': {'availableOn': 1414468800000,
                         'categoryDesc': 'Science Fiction',
                         'createdAt': 1419263528000,
                         'displayEan': '9780553418026',
                         'ean': '9780553418026',
                         'formatDescription': 'Trade Paper',
                         'pageCount': 400,
                         'price': '15',
                         'publisherName': 'Crown Publishing Group',
                         'subjectDesc': 'Sf/Fantasy',
                         'title': 'Martian',
                         'updatedAt': 1419263528000},
             'searchZipcode': '22911',
             'stores': [{'address1': 'Barracks Road Shopping Center',
                         'address2': '1035 Emmet St Suite A',
                         'city': 'Charlottesville',
                         'distance': 3.8863318,
                         'divisionId': 1,
                         'hours': 'Sun 9-9, Mon-Thurs 9-10, Fri & Sat 9-11',
                         'hoursHtml': 'Sun 9-9,<br> Mon-Thurs 9-10,<br> '
                                      'Fri & Sat 9-11',
                         'inventory': 41,
                         'inventoryDate': '2014-10-28 04:00:00',
                         'inventoryLoadDate': 1435896000000,
                         'name': 'Charlottesville',
                         'pageSize': 5,
                         'phone': '434-984-0461',
                         'reservable': True,
                         'resultId': 1,
                         'retailPrice': 15.0,
                         'state': 'VA',
                         'storeId': 2559,
                         'storeReservable': 1,
                         'zipcode': '22903'}],
             'totalCount': 1},
 'statusCode': 200,
 'statusText': 'OK'}

Inside the stores array is an inventory key that shows us the number of items that store has in stock!

Let’s grab it…

print(r['content']['stores'][0]['inventory'])

But we want it to read a little better, so let’s add in some Python string formatting:

print("There are {} in stock.".format(r['content']['stores'][0]['inventory']))
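That lookup can be made a touch safer with a small helper (a hypothetical function of my own, not from the original script) that defaults to 0 when no store comes back in the response:

```python
def store_inventory(response):
    # Walk content -> stores -> first store -> inventory,
    # returning 0 if the stores list is missing or empty.
    stores = response.get('content', {}).get('stores', [])
    return stores[0]['inventory'] if stores else 0

# Sample shaped like the real response, trimmed down.
sample = {'content': {'stores': [{'inventory': 41}]}}
print("There are {} in stock.".format(store_inventory(sample)))
```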

Gist of all the code: https://gist.github.com/jhwhite/3cfd02186e577db7a7e0

Summary

I’ve actually done a little bit more with this here, http://nyt-bestsellers-at-bn.herokuapp.com/.

In the next blog post I’ll go into using the NYT Books API and checking Barnes and Noble stock against that, then in another blog post I’ll write about how to put it all together using a Python microframework called Flask.

Tracking Job Applications Using Trello

I’ve had the unfortunate experience of being laid off recently due to a re-org of sorts. My typical response is to take a day to myself where I decompress from the situation, enjoy the extra time I’ll have and hit the job search hard the next day.

Other than not having a steady source of income, one of the problems I’ve encountered is how to track my applications. The first time this happened to me I didn’t do any real tracking. Toward the end of my job search I started using a web app to track the submitted applications, where I was in the process, and any notes about contacts for the posted position. But 2 years later at my next job search I wasn’t able to find the app nor have I been able to find it this time. So I had to move on to something else.

I started this job search using an Excel spreadsheet. Excel can be pretty awesome when needed, but it does kinda scream “boring,” right? I looked into a few different web apps but decided to come up with my own workflow using Trello.

The workflow itself is very easy to set up and is fairly extensible due to the robustness of Trello.

Get yourself to Trello and create a free account.

In the upper right hand corner click the + sign and choose New Board. Give your board a name, something descriptive.

You’ll see the typical Kanban lists by default. You can’t delete a list, only archive it, so you might as well rename these three to the first three lists we’ll use for our tracking.

Rename the first three lists to:

  • Applications
  • Application Sent
  • Phone Interview

Then in the text box Add a list let’s start adding the rest of our lists.

  • Thank you email (Phone Interview)
  • Waiting to hear back (Phone Interview)
  • Formal Interview
  • Thank you card (Formal Interview)
  • Waiting to hear back (Formal Interview)

Your final board will have all eight of these lists.

I’ve set the workflow up so that you should only have to move left to right. There might be some instances where you have a phone interview following up a formal interview or a follow-up interview after your first Formal Interview, and at that point I do move the cards back and put them through the workflow again.

Now let’s review the lists.

Applications

As you find a job application you want to apply for you add it to this list. I name the card Job Title - Company. So if I were applying for a Software Engineer position at Google the name of the card would be Software Engineer - Google.

I then take the job description and paste it into the Description of the card.

Application Sent

When I’ve finished writing an employer specific cover letter and finished tweaking my resume for that employer I submit the documents. At this point I move the card from Applications to Application Sent. The move will record the time and date of the submission but I go ahead and add a comment to the card with the Applied Date and with a link to the URL if I found the job online somewhere.

Phone Interview

More often than not, the first step in the interview process is a phone interview. When I’m contacted for a phone interview I change the due date on the card to the time and date of the interview. Then I move the card from Application Sent to Phone Interview.

After speaking with the employer I add whatever information I’ve gleaned from the conversation to the comment section of the card. I also add any contact information I’ve collected and I move the card from Phone Interview to Thank you email (Phone Interview). I also delete the due date on the card.

Thank you email (Phone Interview)

It’s always good practice to follow up any kind of interview with a thank you, so there’s a list specifically for proper interview etiquette. After I’ve sent the thank you email I move the card forward to Waiting to hear back (Phone Interview).

Waiting to hear back (Phone Interview)

Sometimes there’s no special information to add here. If I have contact information for the employer I will add a due date to the card as a reminder to follow up with them. Or, if the employer gives me a timeline, I set the due date accordingly so I know when to expect to hear back and can follow up if I don’t.

Unfortunately, sometimes I never hear back from this list at all. I’m an optimist, so I feel keeping the card in this list for a month is enough time to hear back from someone. If I don’t hear back after a month, I archive the card.

Formal Interview

Great! We’ve been called in to meet face to face! I move the card from Waiting to hear back (Phone Interview) to this list, set or change the due date to the date and time of the interview, add any comments applicable to the formal interview, and start preparing!

Thank you card (Formal Interview)

Once you’re home and your nerves have settled down, move the card from Formal Interview to Thank you card (Formal Interview). Remove any due dates and add all the notes you’ve taken from the formal interview to the comments section of the card.

We’re back to best practices for interviewing! If you’ve been called in to an interview, the employer deserves more than just an email. Get the business cards of everyone who has spoken with you. If the company is within reasonable driving distance, you might consider dropping the thank you cards off at the front desk. Otherwise, do this the old-fashioned way: buy some stamps and mail them in!

Waiting to hear back (Formal Interview)

After you’ve mailed or dropped off your thank you cards, you can move the card for the job from Thank you card (Formal Interview) to Waiting to hear back (Formal Interview). This follows the same rules as Waiting to hear back (Phone Interview): set a due date if you want to follow up, or if you were given a timeline by the employer. It’s rare not to hear back from an employer at this stage, although it has happened to me once.

When you hear back from the employer and, in the unfortunate scenario, they’re moving forward with another candidate, add a note to the comment section on the final decision, date it, and archive the card.

Do not delete the card, as you will want to keep the contact information and notes in case you apply again in the future or the employer reaches out to you again.

Notes

I’ve gone through a few iterations of this workflow and this is what works for me. If you like this but it doesn’t quite work for you…tweak it!

One of the things I tried earlier was to have only one Waiting to hear back list, then use Labels to note what I was waiting to hear back about. I also included Application Sent in that list, since I was waiting to hear back about my initial resume submission. But then I would move cards forward, then back, then forward again. What I like about the above workflow is that for the most part it moves left to right, and I can more easily see which stage each application is in.

An idea I plan on building (unless someone beats me to it, and if someone wants to, please be my guest) is a Trello clipper extension for a browser that clips the Job Title and Job Description from an online job posting and sends them directly to Trello as the Card Title and Card Description.
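The core of that clipper idea is a small mapping step: take whatever the extension scrapes off the page and turn it into the fields Trello's `/1/cards` endpoint expects. Here's a rough sketch of just that mapping, with the extension's scraping and HTTP plumbing left out; the function name and parameters are hypothetical.

```python
def clip_to_card(job_title, company, job_description, list_id):
    """Map a clipped job posting onto the fields Trello's /1/cards
    endpoint expects: 'name' for the card title, 'desc' for the body,
    and 'idList' for the target list (e.g. Applications)."""
    return {
        "idList": list_id,
        "name": "%s - %s" % (job_title, company),  # card naming convention
        "desc": job_description,
    }

# The extension would then POST this dict (plus key and token) to
# https://api.trello.com/1/cards
payload = clip_to_card("Software Engineer", "Google",
                       "Build and ship web applications.", "abc123")
```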