Introducing the QultoftheQuantumQapybaras

QultoftheQuantumQapybaras is a newly formed all-female CTF team in the Portland, Oregon area. We are a sibling team of the older QultoftheQuantumQow. Both teams are based out of the nonprofit hackerspace PASCAL. To the best of our collective knowledge, we’re currently the only all-female CTF team in the area. If there are others, please let us know; we’d love some company.

DC Quals was the team’s first contest, and the first CTF ever for the majority of the team. There were six of us playing this round and we finished #119 out of 1262 teams, putting us in the top ten percent. I’m very proud of how well the team worked together.

CANT_EVEN_UNPLUG_IT

Challenge prompt: “You know, we had this up and everything. Prepped nice HTML5, started deploying on a military-grade-secrets.dev subdomain, got the certificate, the whole shabang. Boss-man got moody and wanted another name, we set up the new names and all. Finally he got scared and unplugged the server. Can you believe it? Unplugged. Like that can keep it secret…”

It also came with a HINT file, containing the following string: "Hint: these are HTTPS sites. Who is publicly and transparently logging the info you need? Just in case: all info is freely accessible, no subscriptions are necessary. The names cannot really be guessed."

The Journey to the Flag

This write up is a combination of our collective notes during the solve. My teammates deserve at least as much credit as I do, especially as they took much better notes. I have elected not to edit my own failures out of the story, in the hopes of inspiring other CTFers not to be discouraged by theirs. The process is messy and that’s okay.

I initially ignored this challenge because someone else was working on it and settled on the RETURN_TO_SHELLQL challenge. At one point, a teammate asked me about certificate authorities because I used to work on Firefox. My curiosity was piqued. OK, I was stuck on shellql and needed a break. Same thing, right?

If the focus on certificates is confusing to you, recall that the challenge prompt mentions a certificate directly. The hint strongly links that to the certificates used by SSL/TLS for HTTPS traffic. In this case we’re looking for a certificate issued for a hostname under “military-grade-secrets.dev”.

CAs didn’t really show up in my corner of the Mozilla world unless there was drama around an organization’s application to be a CA. I did dimly remember that there was supposed to be some sort of oversight and audit trail, but not how it worked or if it still existed. As with many things, Certificate Authorities didn’t quite turn out the way the architects intended.

The hint brightened those dim recollections a bit, and I did some googling on ‘certificate authority transparency’; the most salient result was this Wikipedia article on Certificate Transparency. That closely matched my dim recollections about oversight and confirmed that the records should be publicly available. So, my foolproof plan was to find the certificate record, link it to a URL, drop the URL in the Wayback Machine, retrieve the secret page, and find the flag.

This is where most of the wandering in the weeds happened for me. I searched around for DNS and cert record lookups and found a couple of sites like SSLChecker, MX tools, etc. I plugged variants of “military-grade-secrets.dev” and OOO into several of these without success. RAWR! I _know_ there are historic records! I’ve been digging for them the whole time! After about 3 minutes of seeing red, I calmed down enough to connect the dots. The records I was looking at must only be current records. A quick google for ‘archived dns’ turned up this DNS archive.

The results out of that one were more promising. A search for our key string yielded “Military-grade-secrets.dev”, “www.military-grade-secrets.dev”, “secret-storage.military-grade-secrets.dev”, “Now.under.even-more-militarygrade.pw.military-grade-secrets.dev”. None of the links worked as is. So I proceeded to the next phase of my plan: plugging them all into the Internet Archive’s Wayback Machine.  I came up entirely empty handed for all of them, thought I had failed, and started looking for another angle.
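In hindsight, this is exactly the information that the Certificate Transparency logs from the hint hold, and a single query against a public CT search front end would have surfaced the same hostnames. Here is a minimal sketch of such a lookup, assuming crt.sh’s public JSON interface; the parameters and field names are from memory, so treat it as illustrative rather than authoritative.

```python
# Sketch: search Certificate Transparency logs for names on certificates
# issued under a domain, via crt.sh's public JSON endpoint (assumed interface).
import json
import urllib.request

def ct_names(domain):
    """Return hostnames seen on certificates matching the (wildcarded) domain."""
    url = "https://crt.sh/?q=%25." + domain + "&output=json"  # %25 is a URL-encoded '%' wildcard
    with urllib.request.urlopen(url) as resp:
        records = json.loads(resp.read().decode())
    names = set()
    for record in records:
        # 'name_value' lists the names on the certificate, one per line.
        names.update(record.get("name_value", "").splitlines())
    return sorted(names)

if __name__ == "__main__":
    for name in ct_names("military-grade-secrets.dev"):
        print(name)
```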

My teammate overheard my Wayback attempts, and unlike me, can use tools correctly. 🙂 The Wayback Machine turned up exactly one record for “Now.under.even-more-militarygrade.pw.military-grade-secrets.dev”, dated May 11th. It was a redirection to https://forget-me-not.even-more-militarygrade.pw/.

That page was down, of course; the server is supposed to be unplugged, after all. But the Wayback Machine had a nice copy, which displayed the flag on the lower half of the page in a nice large font: OOO{DAMNATIO_MEMORIAE}.

Thus, QultoftheQuantumQapybaras’ first flag was captured.

Hacker Qapybara artwork by illustrator Dani Grillo (http://danigrillo.com/)


Introduction

    Please note that I did not name the project. It is a play on our respective surnames.

For several months, my research partner Mr.De4d and I have been spelunking in the depths of the Seagate firmware, whose internals are legendarily undocumented.

We have been studying a form of hard drive failure which regularly occurs in the wild in the Seagate Moose and Pharoah drive families. A seemingly undamaged drive powers on, spins up, gets stuck in a busy state, and is never recognized by the connected operating system. Mr.De4d has fixed several of these manually in the wild by forcing a translator rebuild, so we strongly suspect a corrupted translator in the firmware to be at fault.

Our objective is to build a tool or at least a procedure to programmatically fix the translator and make that available. The first step is to make ourselves a controlled corrupted translator to build and test code against. That is proving to be harder than anticipated. Work is ongoing at the time of writing.

Along the way, we have found a number of interesting things about the lists used by the Seagate translator. I had thought to wait until after we had finished our entire project to publish anything. However, we have received feedback that the data recovery community would really like to see what we have found so far even if it is unfinished.

A Bit of Background on Lists used in Firmware Translators

Conventionally, HDDs handle sectors going bad by building a translator that converts a Logical Block Address (LBA) to a physical location, usually represented by the combination of cylinder, head, and sector (CHS). Some physical sectors are defective at manufacture; others fail over time. The translator is responsible for taking LBAs and mapping them onto good sectors, skipping the bad ones (a small sketch of this mapping follows the list below). Conventionally it uses two to three lists to manage the different types of bad sectors. Translators are expected to be rebuilt periodically to factor in changes to those lists. For a more thorough explanation of the translator and remapping, please see MJM’s bad sector remapping article.

They are:

  • G-List – Grown Defect List, list of defects accumulated over time, expected to dynamically grow over time
  • P-List – Primary Defect List, list of defects found at factory at start of device’s lifetime, expected to be static through the life of the device
  • T-List – Thought to stand for Tracks Defects List, although there are other theories. Generally believed to be dynamic.
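To make the mapping concrete, here is a minimal toy sketch of a translator that slips past defective sectors. It is purely illustrative: a real drive translator works per zone, track, and head rather than over a flat sector space, and the names here are mine, not Seagate’s.

```python
# Toy "translator": maps Logical Block Addresses (LBAs) onto physical sector
# numbers, slipping past sectors recorded in a factory (P-List style) defect list.
def build_translator(total_physical_sectors, defect_list):
    """Return a list where index = LBA and value = physical sector number."""
    defects = set(defect_list)
    mapping = []
    for physical in range(total_physical_sectors):
        if physical in defects:
            continue  # slip the bad sector; every later LBA shifts down by one
        mapping.append(physical)
    return mapping

# 20 physical sectors, with sectors 3 and 7 marked bad at the factory.
translator = build_translator(20, defect_list=[3, 7])
print(translator[3])  # LBA 3 now lives at physical sector 4
print(translator[7])  # LBA 7 now lives at physical sector 9
```

If the defect lists themselves become corrupted or overfilled, this mapping can no longer be rebuilt sensibly, which is the failure mode we are chasing.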

What We Found in the Firmware

We were expecting to find three, maybe four lists when we popped open the terminal and started poking around. We were looking to identify and overfill one of the Grown Defects Lists to simulate the broken translator found in the wild. So far we have found eight lists, and we are not certain that we have discovered them all yet.

The lists we have identified so far (Seagate name, type, and notes; G = grown-style, P = primary/factory, ? = unknown):

  • Alt List (Type: G) – Entries by Logical Block Address (LBA) or sector. Synced with the G-List. Changes over time as defects accumulate; expected to be dynamic.
  • G-List (Type: G) – Entries by sector only, unlike the Alt List. Synced with the Alt List. Changes over time as defects accumulate; expected to be dynamic.
  • LBA List (Type: ?) – Might be an alias for the Alt List.
  • Non Resident G-list (NRG) (Type: P) – The 2nd primary list generated at the factory after the P-List, following a 2nd Self-Scan Test. Unclear why. Expected to be static.
  • P-List (Type: P) – “Classic” P-List; the factory defect list used to generate the translator, generated after the initial Self-Scan Test performed at the factory. Expected to be static.
  • T-List (Type: ?) – Cylinders are referenced when its contents are displayed, and it looks similar to the Alt List. May be a copy of the Alt List or may be a true T-List (track defect list).
  • User Slip Defect List (USDL) (Type: ?) – May be related to the G-List; the entries are the same, but each entry carries additional information.
  • System Slip Defect List (SSDL) (Type: ?) – Viewable after the ‘checks T-list’ command(s) are run, suggesting a relationship to the T-List. Much, much smaller than the USDL.

Bottom Line:

       This one does what it says on the tin in a clear, readable format that should be accessible to most of the population. Those with a technical background will need to skim large sections; thankfully, the book’s clear format makes that easy.

  1. Did I learn something?
    1. I developed a high level understanding of the digital forensics field.
    2. I learned quite a few interesting tidbits about Windows.
    3. I learned quite a bit about the legal tangles involved in digital evidence.
  2. Did I enjoy the time spent reading it?
    1. Yes. I really enjoyed the anti-forensics chapter.
  3. Would recommend to:
    1. Those studying law or criminal justice, and police officers doing field work. Chapter 7 on legal aspects and Chapter 10 on mobile devices would be particularly relevant.
    2. Those considering a career in forensics. The author provides many details about the ‘daily grind’ and some professional pitfalls of being an examiner throughout the book.
    3. Career changers who want to get into tech but aren’t sure that full on software development suits them and are looking for related career paths.
    4. Veteran lawyers struggling with digital evidence reports or who are new to digital evidence in general. Chapter 10 on mobile devices is especially relevant, although Chapter 7 is unlikely to contain anything they don’t already know.
  4. Would not recommend to:
    1. DevSecOps and Infosec folks looking to improve their intrusion postmortems, or looking for guidance on how to conduct their own detection & collection (effectively forensics) during or after an attack.
    2. The technically inclined looking for a deeper insight into the software that powers digital forensics. This was very much focused on the squishier side of the keyboard.

On Content:

      In general, I found the content to be good quality and easily digestible. The presentation of the forensics process, the factors that influence its application in the digital space, and what makes for a successful investigation from start to finish stand out in particular.

      The preface of the book discusses the intended audience and the expectation that readers have a ‘fundamental understanding’ of computers. The explanations for different bits of technology were so vague and so high level that they must be meant for those with no understanding of computers, not a ‘fundamental’ one.  I found the partial sector data recovery and the encryption examples to be especially drawn out and hand wavy, and advise anyone technically inclined to skip those sections.

      “You cannot trust closed-source crypto”  on page 87 is probably the boldest line in the entire book. The author is careful to present balanced arguments around open source versus closed source forensic software suites, the possession of disk wiping tools, etc. So it was a bit startling to see him take an unequivocal stance around something that is still controversial in public discourse.

      Chapter 6 on anti-forensics is most likely to appeal to those in tech with a pen-test or tinkering bent, and it was my favorite chapter. I was surprised to learn that steganography is still one of the most effective ways to foil forensics, even if the examiners know to look for it in the first place. I remember learning about it while researching Benedict Arnold in high school and doing an in-class demo of the spy’s technique. I had thought it largely obsolete and I was wrong.

      I had some smaller gripes with the book. I didn’t love the Windows-exclusive focus, although I understand the stated rationale. Still, I expected some mention of common software found on servers and networking equipment in Chapter 9 on network forensics. I also thought the treatment of IRC was unbalanced: IRC is used by many developers working at tech companies, universities, and on open source projects, yet only its use by criminal elements was noted; none of its legitimate uses were mentioned.

      I also wish the author had discussed the economics of the field more directly. He mentions various costs when discussing trade-offs for accreditation, for multiple tooling suites, and for physical supplies. He never directly discusses income. Reading between the lines around the emphasis on court admissibility that reoccurs throughout the book, I infer that the justice department and lawyers are the main clients of digital forensics. I think a direct discussion of that and its impact on the field would have been very useful.  

      The author repeats warnings and concerns about the gap between the speed of technology and court-admissible forensics tools throughout the book but avoids discussion of how to fix the problem. It’s not possible to discuss how to close that widening gap without discussing the influence and responsibility of the economic engine behind the field.

On Presentation:

      The writing was clear, concise, and easy to follow. The chapter arrangement, and the section arrangement within each chapter, flow smoothly. My only presentation nit is that the font size for the interviews is much smaller than the paragraph text, which hurts readability.

Searching for stuff, either in your own history or on the web, is an integral part of the browsing process.  The UI for searching is important to a great user experience. For us, that UI is part of what we call the Awesomescreen. Firefox Desktop has an Awesome bar, we have an Awesomescreen. Look for these changes in Firefox 44.

New Search Suggestions from Your Search History

Previously, if you opted into search suggestions, we only showed you suggestions from your default search engine (if your default search engine supported them). Now we can also show you things you’ve searched for in the past on your mobile device. These suggestions from your search history are denoted by a clock icon. Anthony Lam designed the shiny new UI you’ll see below.

We’ve also updated the layout and styling of the search suggestion ‘buttons’ to help you spot the one you want faster and make the whole process easier on your eyes. 😉

We believe in user choice and control, so this is controlled by a separate setting. If you don’t like it, you can just turn it off without losing suggestions from your preferred search engine. Simply go to Settings > Customize > Search > “Show search history” and uncheck the checkbox. Likewise, if you dislike suggestions from search engines, you can disable that without losing suggestions from your own history. For that, go to Settings > Customize > Search > “Show search suggestions” and uncheck that box.

New saved search suggestions on phone

New saved search suggestions on tablet

Updated Permissions Prompt

We’ve updated the prompt to feel more modern and integrated with the browser. You may have noticed that the doorhanger prompts recently got a makeover, courtesy of my colleague Chenxia.

New permissions prompt for opting into search suggestions

Engineer goes off and adjusts the plan. At a high level, it looks like this:

    Client side code

        Set up

            snippet lets users know there is a new option in preferences

            if the app locale does not map to a supported language, the pref panel is greyed out with a message that their locale is not supported by the EU service

            if it does, user clicks ToS, privacy policy checkbox, confirms language

            app contacts server

            if server not available, throw up offline error

            if server available, upload device id, language, url list

            server sends back the guid assigned to this device id

            notify user setup is complete

            enable  upload service

        When new tab is opened or refreshed

            send msg to server with guid + url

        Turning off feature

            prompt user ‘are you sure’ & confirm

            notify server of deletion

            delete local translated pages

Server side code

    Set up

        poked by client,

        generate guid

        insert into high risk table: guid+device id

        adds rows for tabs list (med table)

        adds rows for the urls (low table)

    Periodic background translation job:

        finds urls of rows where the translated blob is missing

        for each url, submits untranslated blob to EU’s service

        sticks the resulting translated text blob back into the table

    Periodic background deletion jobs:

        finds rows older than 2 days in the low risk & medium risk tables and evicts them

        finds rows in the sensitive table older than 90 days and evicts them; secure destruction is used.

        user triggered deletion

            delete from sensitive table. secure destruction

            delete from medium table

    Database layout

        sensitive data/high risk table columns: user guid, device id

            maps guid to device id

        medium risk table columns: user guid, url

            maps guid to tabs list

        low risk table columns: url, timestamp, language, blob of translated text

            maps urls to translated text
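To make the three-table split concrete, here is a minimal sketch of the layout in SQLite, plus the query the background translation job would start from. Table and column names are placeholders of my own, not anything from a real service, and a production deployment would also encrypt the device ids as agreed earlier.

```python
# Sketch of the high/medium/low risk table split described above (SQLite).
# All names are placeholders; device ids would additionally be encrypted.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- high risk: maps the service-assigned guid to the hardware device id
CREATE TABLE sensitive_devices (
    guid      TEXT PRIMARY KEY,
    device_id TEXT NOT NULL
);

-- medium risk: maps a guid to the urls (tabs) it has open
CREATE TABLE user_tabs (
    guid TEXT NOT NULL,
    url  TEXT NOT NULL
);

-- low risk: maps a url to its translated text
CREATE TABLE translations (
    url        TEXT NOT NULL,
    fetched_at TIMESTAMP NOT NULL,
    language   TEXT NOT NULL,
    translated BLOB           -- NULL until the background job fills it in
);
""")

# The periodic background translation job starts from the rows it still owes:
untranslated = conn.execute(
    "SELECT url, language FROM translations WHERE translated IS NULL"
).fetchall()
```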

The 4th Meeting

Engineer: Hey, so what do you guys think of the new plan? The feedback on the mailing list was pretty positive. People seem pretty excited about the upcoming feature.

Engineering Manager: indeed.

DBA: much better.

Operations Engineer: I agree. I’ll see about getting you a server in stage to start testing the new plan on.

Engineer: cool, thanks!

Engineer: Privacy Rep, one of the questions that came up on the mailing list was about research access to the data. Some PhD students at the Sorbonne want to study the language data.

Privacy Rep: Did they say which bits of data they might be interested in?

Engineer: the most popular pages and languages they were translated into. I think it would really be just the low risk table to start.

Privacy Rep: I think that’d be fine, there’s no personal data in that table. Make sure we send them the basic disclosure & good behavior form.

Engineering Manager: A question also came to me about exporting data. I don’t think we have anything like that right now.

Engineer: No, we don’t.

Privacy Rep: well, can we slate that for after we get the 1.0 landed?

Engineering Manager: sounds like a good thing to work on while it’s baking on the alpha-beta channels.

Who brought up user data safety & privacy concerns in this conversation?

Engineer, Engineering Manager, & Privacy Rep.

Engineer goes off and creates the initial plan & sends it around. At a high level, it looks like this:

    Client side code

        Set up

            snippet lets users know there is a new option in preferences

            if the app language pref  does not map to a supported language, the pref panel is greyed out with a message that their language is not supported by the EU service

            if it does, user clicks ToS, privacy policy checkbox, confirms language

            app contacts server

            if server not available, throw up offline error

            if server available, upload device id, language, url list

            notify user setup is complete

            enable  upload service

        When new tab is opened or refreshed

            using device id, send a sql query for matching row of url+device id

 

Server side code

    Set up

        poked by client, adds rows to table, one for each url (tab)

    Periodic background translation job:

        finds urls of rows where the translated blob is missing

        for each url, submits untranslated blob to EU’s service

        sticks the resulting translated text blob back into the table

    Periodic background deletion job:

        finds rows older than 2 days and evicts them

        unless that is the last row for a given device id, in which case hold for 90 days

    Database layout

        table columns: device id, url, timestamp, language, untranslated text, translated text
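The deletion rule above is the subtle part of this first plan, so here is a minimal sketch of what that eviction job might look like against the single-table layout (SQLite again, with placeholder names; illustrative only).

```python
# Sketch: periodic eviction job for the single-table layout above.
# Rows older than 2 days are evicted, unless a row is the newest one left for
# its device id, in which case it is held for 90 days instead.
import sqlite3

def evict(conn: sqlite3.Connection) -> None:
    # Delete stale rows that are NOT the newest row for their device id.
    conn.execute("""
        DELETE FROM translations
        WHERE timestamp < datetime('now', '-2 days')
          AND EXISTS (
                SELECT 1 FROM translations AS newer
                WHERE newer.device_id = translations.device_id
                  AND newer.timestamp > translations.timestamp
          )
    """)
    # Even the held "last rows" expire once they pass 90 days.
    conn.execute(
        "DELETE FROM translations WHERE timestamp < datetime('now', '-90 days')"
    )
    conn.commit()
```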

 

 The 3rd Meeting:

Engineer: Did y’all have a chance to look over the plan I sent out? I haven’t received any email feedback yet.

DBA: I have a lot of concerns about the database table design. The double text blobs are going to take up a lot of space and make the server very slow for everyone.

Operations Engineer: ..and the device id really should not be in the main table. It should be in a separate database.

Engineer: why? That’s needless complexity!

Operations Engineer: Because it should be stored in a more secure part of the data center. Different parts of the data center have different physical & software security characteristics.

Engineer: “oh”

Engineering Manager: You should design your interactions with the server to encapsulate where the data comes from inside the server. Just query the server for what you want and let it handle finding it. Building the assumption that it all comes from one table into your code is bad practice.

Engineer: ok

DBA: The double blobs of text are still a problem.

Engineer: Ok, well, let’s get rid of the untranslated text blob. I’ll take the url and load the page and acquire the text right before submitting it to the EU service. How does that sound?

Engineering Manager: I like that better. Also allows you to get newest content. Web pages do update.

Privacy Rep: I was wondering what your plans are for encryption? I didn’t see anything in your draft about it either way.

Engineer: If it’s on our server and we’re going to transmit it to an outside service anyway, why would we need encryption?

Privacy Officer: We’ll still have a slew of device ids stored. At the very least we’ll need to encrypt those. Risk mitigation.

Engineer: But they’re our servers!

Operations Engineer: Until someone else breaks in. We’ve agreed that the device ids will go in a separate secure table, so it makes sense to encrypt them.

Engineer: fine.

Privacy Officer: The good news is that if you design the url/text table well, I don’t think we’ll need to encrypt that.

Engineer: That’s your idea of good news?

Engineering Manager: Privacy Officer is here to help. We’re all here to help.

Engineer: Right, Sorry Privacy Officer.

Privacy Rep: It’s ok. Doing things right is often harder.

Privacy Officer: Anyway, I wanted to let you all know that I’ve started talking with Legal about the terms of service and privacy policy. They promise to get to it in detail next month. So, as long as we stay on the stage servers, it won’t block us.

Engineering Manager: thanks. Do you have any idea how long it might take them once they start?

Privacy Officer: Usually a couple weeks. Legal is very, very thorough. I think it’s a job requirement in that department.

Engineer: Fair enough.

Operations Engineer: Speaking of legal & privacy, do you/they care about the server logs?

Privacy Officer: I don’t think so. The device id is the most sensitive bit of information we’re handling here and that shouldn’t appear in the logs right? I’ll still talk it over with the other privacy folks in case there’s a timestamp to ip address identification problem.

Operations Engineer: That’s correct.

Engineering Manager: Always good to double check.

Privacy Officer: By the way, what happens to a user’s data when they turn the feature off?

Engineer: … I don’t know. Product Manager didn’t bring it up. I don’t have a mock from him about what should happen there.

Privacy Officer: we should probably figure that out.

Engineer: yeah

Engineering Manager: I look forward to seeing the next draft. I think after you’ve applied the changes & feedback here you should send it out to the public engineering feature list and get some broader feedback.

Engineer: will do. Thanks everyone

Who brought up user data safety & privacy concerns in this conversation?

DBA, Engineering Manager, Privacy Rep.

The day after the first meeting…

Engineering Manager: Welcome DBA, Operations Engineer, and Privacy Officer. Did you all get a chance to look over the project wiki? What do you think?

Operations Engineer: I did.

DBA: Yup, and I have some questions.

Privacy Officer: Sounds really cool, as long as we’re careful.

Engineer: We’re always careful!

DBA: There are a lot of pages on the web. Keeping that much data is going to be expensive. I didn’t see anything on the wiki about evicting entries, and for a table that big, we’ll need to do that regularly.

Privacy Officer: Also, when will we delete the device ids? Those are like a fingerprint for someone’s phone, so keeping them around longer than absolutely necessary increases risk for both the user & the company.

Operations Engineer: The less we keep around, the less it costs to maintain.

Engineer: We know that most mobile users have only 1-3 pages open at any given time and we estimate no more than 50,000 users will be eligible for the service.

DBA: Well that does suggest a manageable load, but that doesn’t answer my question.

Engineer: Want to say if a page hasn’t been accessed in 48 hours we evict it from the server? And we can tune that knob as necessary?

Operations Engineer: As long as I can tune it in prod if something goes haywire.

Privacy Officer: And device ids?

Engineer: Apply the same rule to them?

Engineering Manager: 48 hours would be too short. Not everyone uses their mobile browser every day. I’d be more comfortable with 90 days to start.

DBA: I imagine you’d want secure destruction for the ids.

Privacy Officer: You got it!

DBA: what about the backup tapes? We back up the dbs regularly?

Privacy Officer: are the backups online?

DBA: No, like I said, they’re on tape. Someone has to physically run ‘em through a machine. You’d need physical access to the backup storage facility.

Privacy Officer: Then it’s probably fine if we don’t delete from the tapes.

Operations Engineer: What is the current timeline?

Engineer: End of the quarter, 8 weeks or so.

Operations Engineer: We’re under water right now, so it might be tight getting the hardware in & set up. New hardware orders usually take 6 weeks to arrive. I can’t promise the hardware will be ready in time.

Engineering Manager: We understand, please do your best. Product Manager won’t be happy, but we’ll delay the feature if we have to.

Privacy Officer: Who’s going to be responsible for the data on the stage & production servers?

Engineering Manager: Product Manager has final say.

DBA: thanks. good to know!

Engineer: I’ll draw up a plan  and send it around for feedback tomorrow.

 

Who brought up user data safety & privacy concerns in this conversation?

Privacy Officer is obvious. The DBA & Operations Engineer also raised privacy concerns.

The 1st Meeting

Product Manager: People, this could be a game changer! Think of the paid content we could open up to non-English speakers in those markets. How fast can we get it into trunk?

Engineer: First we have to figure out what *it* is.

Product Manager: I want users to be able to click a button in the main dropdown menu and translate all their text.

Engineering Manager: Shouldn’t we verify with the user which language they want? Many in the EU speak multiple languages. Also do we want translation per page?

Product Manager:  Worry about translation per page later. Yeah, verify with user is fine as long as we only do it once.

Engineering Manager: It doesn’t quite work like that. If you want translation per page later, we’ll need to architect this so it can support that in the future.

Product Manager: …Fine

Engineer: What about pages that fail translation? What would we display in that case?

Product Manager: Throw an error bar at the top and show the original page. That’ll cover languages the service can’t handle too. Use the standard error template from UX.

Engineering Manager: What device actually does the translation? The phone?

Product Manager: No, make the server do it, bandwidth on phones is precious and spotty. When they start up the phone next, it should download the content already translated to our app.

Engineer: Ok, well if there’s a server involved, we need to talk to the Ops folks.

Engineering Manager: and the DBAs. We’ll also need to find who is the expert on user data handling. We could be handling a lot of that before this is out.

Project Manager: Next UI release is in 6 weeks. I’ll see about scheduling some time with Ops and the database team.

Product Manager: Can you guys pull it off?

Engineer: Depends on the server folks’ schedule.

Who brought up user data safety & privacy concerns in this conversation?

The Engineering Manager.

Introduction

In January, I laid out in a presentation & blog post the groundwork for a discussion about applying Mozilla’s privacy principles in practice to engineering. Several fellow engineers wanted to see it applied in a concrete example, complaining that the material presented was too abstract to be actionable. This is a fictional series of conversations around the development of a fictional mobile app feature. Designing and building software is a process of evolving and refining ideas, and this example is designed to help engineers understand that actionable privacy and data safety concerns can and should be part of the development process.

Disclaimer

The example is fictional. Any resemblance to any real or imagined feature, product, service, or person is purely accidental. Some technical statements are made to flesh out the fictional dialogues; they are assumed to apply only to this fictional feature of a fictional mobile application. The architecture might not be production quality. Don’t get too hung up on it; it’s a fictional teaching example.

Thank You!

    Before I begin, a big thank you to Stacy Martin, Alina Hua, Dietrich Ayala, Matt Brubeck, Mark Finkle, Joe Stevenson, and Sheeri Cabral for their input on this series of posts.

The Cast of Characters

so fictional they don’t even get real names

  1. Engineer
  2. Engineering Manager
  3. Service Operations Engineer
  4. Database Administrator (DBA)
  5. Project Manager
  6. Product Manager
  7. Privacy Officer (also known as Legal’s Privacy Auditor or Privacy & Security; there are many names & different positions for this role)
  8. UX Designer

Fictional Problem Setup

Imagine that the EU provides a free service to all residents that will translate English text to one of the EU’s supported languages. The service requires the target language and the device id. It is, however, rather slow.

For the purposes of this fictional example, the device id is a hard coded number on each computer, tablet, or phone. It is globally unique and unchangeable, and so highly identifiable.

A mobile application team wants to use this service to offer in-page translation to EU residents using their mobile app. For non-English readers, the ability to read the app’s content in their own language is a highly desired feature.

After some prototyping & investigation, they determine that the very slow speed of the translation service adversely affects usability. They’d still like to use it, so they decide to evolve the feature: translate the user’s open content while the device is offline, so the translated content comes up more quickly when the user reopens the app.

Every New Feature Starts Somewhere

Engineer sees an announcement in the tech press about the EU’s new service and its noble goal of overcoming language barriers on the web for its citizens. She sends an email to her team’s public mailing list: “wouldn’t it be cool to apply this to our content for users, instead of them having to copy/paste blocks of text into an edit box? We have access to those values on the phone already.”

Engineering Team, Engineering Manager & Product Manager on the thread are enthusiastic about the idea.  Engineering Manager assigns Engineer to make it happen.

 

She schedules the initial meeting to figure out what the heck that actually means and nail down a specification.

Book reviews are entirely my opinion and I am not an editor. Please take them with a pound of salt. If you are looking for an in-depth review, this is not it.

Introduction

‘American Lion’ focuses on President Andrew Jackson’s years in the White House, though it does cover his life start to finish. I became interested in the book after the author’s lively interview on the Daily Show. Then I forgot about it until I saw it lying on a friend’s shelf. He graciously lent me the book.

On Content:

The most fascinating part of the book dealt with the South Carolina Nullification crisis during Jackson’s time in office. The crisis was averted, but the legal standing of a state’s right to nullify federal laws was not resolved. The Nullification crisis & its roots laid much of the legal groundwork for the American Civil War. The South made many arguments about the ‘intents of the Framers’ to support their position, much like modern American politicians do today.

Madison, a Founding Father (a Framer, former President, and author of the ‘Virginia Resolutions’ cited in legal support of Nullification), was alive and vocal that the ‘Virginia Resolutions’ did not extend to nullification and that nullification of federal laws was not intended by the Constitution. He was roundly ignored by the South.

I found I learned more about the politics swirling about in the early days of America than about Jackson himself. I had heard of the Eaton affair in AP American History class, but was unaware of its impact on national politics. It caused Jackson to expel his niece for a while from the position of White House Hostess (now understood to be among the duties of the First Lady), determined who would be Jackson’s party’s successor (the next President), and even led one Cabinet minister to attempt to murder another.

On Style & Presentation:

I was expecting a lively and engaging narrative style and was severely disappointed in that regard. Large swaths of the book are exceedingly dry and not good about conveying why I should care about the current topic. Some sections were so disjointed that I lost the narrative thread entirely. However, there were a couple of chapters that I could not put down.

I also felt he told me about Jackson’s character more than he demonstrated it. There are some vignettes to support his opinion of Jackson’s personality, but they occur much later in the book, after I had become annoyed at the overuse of adjectives and underuse of examples. I wish he had held off presenting his view of Jackson’s character until the supporting narrative had a chance to appear.

I also wish it had come with a family tree of Jackson’s relations. The many similar names often made it hard to follow who was whom, especially when citing family sources. Which Andrew wrote this particular quote? The man himself, his adopted son, his nephew, or one of his more distant relations?

Bottom line:

  1. Did I learn something?
    1. Yes, about the legal origins of the American Civil War.
    2. I did not learn as much about Jackson’s inner workings as I expected.
  2. Did I enjoy the time spent reading it?
    1. I would say I enjoyed about a third of the book. My friend did not bother finishing it.
  3. Would I recommend it?
    1. To someone trying to understand the evolution of the American presidency and capable of skimming, yes.
    2. To anyone else, no.