Sunday, November 24, 2019

Are you the valet?

In early November I wanted to extract the data from my very first TracFone Android device, a ZTE Valet.  It turns out the ZTE Valet is easily temp-rooted: search for RootMyValet and a GitHub result is the starting point.  Buried under the repo's lib directory (IIRC) is the roothandler binary.  I mention this because I spent days looking for the cited roothandler binary in su packages, as described by the README.

All along I was misunderstanding the README.   The README says to locate your favorite su package and extract the su binary.  It then instructs you to copy the roothandler file into the same /data/local/tmp directory where the su binary is dropped, which gave me the impression roothandler is bundled with the su package.  Then another issue surfaced: the Valet is Android 4.1, and all the recent su binaries I extracted required PIE (not to be confused with Android P/Pie).

Eventually I found an old enough su without PIE, but not before taking an alternate path with KingRoot and discovering that KingRoot not only failed to root, but successfully installed a bunch of shims to phone home statistics to qq.com and Yahoo, and offered no way to uninstall them.

Now I had two problems: the mostly well-mannered "malware" KingRoot -- and -- I still did not have a backup.

First off, the native Android ADB backup produced nothing but a 504-byte file, and Move to iOS crashed.  I tried another backup program from the Play Store, but it, too, crashed. How bad is KingRoot?  After setting up hostapd + dnsmasq + tcpdump, I didn't see anything other than small packets headed toward qq and Yahoo, but that doesn't mean it didn't lift some data during the initial installation.  I'll save that for some later point in time.

As far as the RootMyValet presence on GitHub goes, I gather that RootMyValet.apk is purely a UI to invoke roothandler easily, and the GitHub repo is the wrapper around the whole kit, which includes the apk for the optional UI invocation.

After crawling through the source code and finding a reference to a getroot function, I figured the GitHub repo was an 'apk' I was supposed to build myself. With that all cleared up, and in possession of the roothandler binary, I prepared my own version of the script that roothandler executes.

In my own script, I had it chdir to /data and run 'busybox find | busybox cpio > /sdcard/data.cpio'.  This is a simplified version of the actual command, as I don't recall the precise flags that the cpio applet of this busybox required, but suffice it to say it would have been similar to "-ov -H crc" or "-ov -H newc".
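Reconstructed from memory, the script amounted to something like this (the cpio flags are a guess, per above):

#!/system/bin/sh
# back up /data to a cpio archive on the sdcard
cd /data
busybox find . | busybox cpio -ov -H newc > /sdcard/data.cpio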

If you're struggling to follow me up to this point: I used the platform-tools ADB binary, which allowed me to get a shell.  In that shell I was able to invoke roothandler, and roothandler, in turn, invoked the shell script (as described in the README) to run, as root, anything you require during the temp root.
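In practice, the session looked roughly like this (a sketch; the paths follow the README's /data/local/tmp convention, and the exact file names may differ):

adb push su /data/local/tmp/
adb push roothandler /data/local/tmp/
adb shell
cd /data/local/tmp && chmod 755 su roothandler
./roothandler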

Happy with my backup, I successfully migrated my OTP and my organization's Duo multi-factor to different devices.

Update
I dug a bit more into the behavior of KingRoot on a Valet.  It does work on the Valet, provided your /data volume is not encrypted.  If you encrypted your Valet using the Android setting, KingRoot will not succeed, but it will still install the hooks to ship telemetry.

After it succeeds on an unencrypted Valet, and after installing SuperSU, invoke adb shell and type su. 

In this repeat attempt, I took these steps.
1. Install KingRoot and launch it.
2. Try Root - it succeeds.
3. Install SuperSU.
4. Grant SuperSU permission in KingRoot.
5. adb shell, and try su - it failed with a segmentation fault.
6. Reboot the phone.
7. Run KingRoot again, and Try Root.
8. adb shell, and try su - this time it worked.

Making the switch from iOS to Android - an iPhone Notes rescue story

In August 2019 I picked up a Pixel XL to replace my soon-to-be-unsupported iPhone 6. I mostly use a phone for jotting down notes, texting, satisfying my curiosity on Wikipedia, reading Science Daily, capturing interesting photo moments, navigation, and the weekly phone call. Almost any modern phone satisfies my use case, aside from one requirement: it needs to receive regular security updates.

The Pixel XL received test versions of Android 10, which implied it would receive the full edition. As long as I can tolerate the differences, Android 10 should keep the Pixel XL patched for a while longer.

S&D iPhone 7's on a particular Amazon-owned deal site were above $360, and an open-box new Pixel XL listed for $210.  The Pixel seemed to be a cost-effective way to meet my needs.

I ordered a battery case, which is normal for any phone I pack around. Wearing the battery case, it's not huge -- it's annoyingly "Phabulous" -- yet I am committed to seeing this through.

To get my content migrated, I used the Pixel's migration assistant with the Pixel crossover dongle and the Lightning cable.  It moved photos, text messages, call history, and a fairly impressive "vis-a-vis" of apps; however, app data for the apps it located in the Play Store didn't make the transfer.  For instance, my TracFone app had to be set up all over again.  Office Lens lacked any of the associated captures, but that did not bother me.

To my chagrin, the images I had sent and received via texts were not there.  Add to that, my iPhone notes had no corresponding app to receive them; the notes were simply not there.

With Apple's iCloud service, you might be thinking, "Why is this an issue?  You can access your notes from iCloud."  I could, yes, but my data size is over the free 5GB, I already have a OneDrive subscription, and, for reasons, funding iCloud instead of OneDrive is not presently an option.

Linux is all I use, and despite what others might classify as limitations, it has certain applications which make deviation from the trodden path possible.

Committed to retiring the iPhone 6 and wanting to keep my health trend data from a Viatom app, I tried idevicebackup2. It works and produces an (optionally) unencrypted backup.  The result is a big tree of hex-numbered directories storing some number of hexadecimal GUIDish files.
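For reference, the invocation was along these lines (a sketch from memory -- the target directory is an example, and the encryption toggle may prompt for the backup password):

idevicebackup2 encryption off
idevicebackup2 backup --full ~/iphone-backup

Against the resulting tree, using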

find . -type f -exec file {} \;

is informative and reveals each file's type. Common types are Apple binary data, ASCII text, SQLite databases, JPEGs, and PNGs. From this information I assembled a pipeline with a grep for image files.  With the image files identified, I figured I could begin to determine the app-to-image relationship and locate the apps and their corresponding files, like the database or ASCII text, or, for that matter, my iPhone Notes.
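The pipeline amounted to something like this (reconstructed -- file's exact output strings vary by version, so treat the grep pattern as an assumption):

find . -type f -exec file {} \; | grep -Ei 'jpeg|png' > image-files.txt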

I opened the Notes app on my iPhone and located a message with a distinct word that is unlikely to be found anywhere in the backup tree.  I used

grep -r "Lincoln" ./*

where the current directory (. and thus the . in ./*) is the parent directory I specified in the idevicebackup2 command line.

I found the file and copied it to a scratch space.  I used the 'file' command on it and it revealed an SQLite DB. 

I installed sqlitebrowser and opened the file.  I found that the note ZDATA column was compressed, and a simple online search yielded the syntax I needed to extract the data.  I arrived at the script below, which extracted everything that I cared about: the text.  I had two notes with attachments; upon reviewing them on my iPhone, I still put no value on keeping them.  According to the DB file, I had 473 notes.
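A quick sanity check with the sqlite3 CLI confirms the table and the note count before looping (the names below are the ones I found in my copy of the DB):

sqlite3 ./notes.sql3 '.tables'
sqlite3 ./notes.sql3 'SELECT COUNT(*) FROM ZICNOTEDATA;'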

This is the command I used. I hope it can be of help to others!

COUNTER=1
while [ $COUNTER -lt 474 ]; do
  ZCOUNT=$(printf '%03d' "$COUNTER")
  sqlite3 ./notes.sql3 "SELECT writefile('/home/mk/Documents/object${ZCOUNT}.gz', zdata) FROM ZICNOTEDATA WHERE z_pk = ${COUNTER};"
  zcat "/home/mk/Documents/object${ZCOUNT}.gz" | strings > "/home/mk/Documents/note-${ZCOUNT}.txt"
  COUNTER=$((COUNTER+1))
done


Wednesday, October 2, 2019

More about the TPM-iness

Last week, while fumbling through the source code for tpm_managerd, I noticed some mock units.  I got the crazy idea: what if those mock units could chat back to the TPM client and make it all happy?  Except that sounds like a good idea only until it isn't.  Clearly the TPM needs to store key details.  If I sufficiently understand the TPM's role, and cryptohomed thinks it has stored the key that it will need next boot, but said key did not get stored -- thanks for the exercise in cryptoshredding.

The chromefy / croissant team has done a great job of making swtpm available. Kudos to croissant for helping support Chrome on devices absent a TPM2.  I have previously ranted about why, personally, I do not condone third-party binaries like the ones croissant delivers; but, that aside, providing future Chrome users a way to build their own and overlay it atop an integral filesystem sounds plausible -- though only thanks to the croissant team paving the way.

Amidst my searching for the steps that croissant uses to build a TPM-capable image, I found chromefy's steps to leverage an older Chromebook's TPM components.  Then...lightbulb comes on.

Why not script-o-matically download two images to install an official EFI Chrome build, at least until swtpm is absolutely required?  (A sketch of the first two steps appears after this list.)
1. Download banjo and park it under /tmp/source/TPM.
2. Download a release with the TPM2 requirement and park it under /tmp/source/ROOT-A.
3. Good idea or bad? Take partition #7 (ROOT-C), resize it, and store an overlay - an ext4 rootfs if you wanted.
4. Use futility to extract the EFI vmlinuz and update partition #12.
5. If an initramfs can't be exposed (I tested CPIO and it didn't work), edit grub to mount partition #5 - yep, no difference from croissant at this point.
6. Resize STATE and format it.
7. Create and store a remerge script on partition #7 which basically does all of the above, so updates* can be incorporated by downloading the latest TPM1 build available and merging it with the new update on partition #5.

* - With a FALCO I updated to the latest August 29, 2019 build.  Falco was released September 29, 2013, and despite the five-year life-cycle for Chrome devices, it received an update.  What does this mean with respect to the warning I received that my BANJO installation wouldn't receive any future updates?  At this point more investigation must happen.
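Script-wise, steps 1 and 2 would start as little more than this (the URLs are placeholders I would still have to fill in):

mkdir -p /tmp/source/TPM /tmp/source/ROOT-A
curl -Lo /tmp/source/TPM/banjo_recovery.zip "$BANJO_RECOVERY_URL"
curl -Lo /tmp/source/ROOT-A/tpm2_release.zip "$TPM2_RELEASE_URL"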

Ah, probably all crazy speak.

Sunday, September 22, 2019

The pretender: Fooling 3dmark SkyDiver to test your card

This is a quick post about a funny discovery I made while testing a GTX 460 I received second-hand.  I also received a GTX 550 knowing that it needed its GPU fan replaced, but, in a PC built from the usual modular pieces, I put in both cards just to see if the drivers would squawk about the SLI cable I affixed between them.  They did not, but neither did they activate SLI.  I ran the Sky Diver benchmark and the GTX 460 completed it.  Power off. I slapped a fan onto the GTX 550 -- which required removing the GTX 460 to accommodate the unsightly tape-fastened fan -- and it benched slower than the GTX 460. Power off. Swapped the GTX 460 in place of the GTX 550 and launched the benchmark, but the benchmark stated I needed a capable video card, even while reporting the GTX 460 as the video card installed in the system.  Power off.  Removed the ad-hoc fan from the GTX 550 and installed it in the slot neighboring the GTX 460.  Power on with both cards. Run benchmark. This time it ran.

My takeaway: you stand a chance of benching other DirectX 12 cards if 3dmark's Sky Diver detects you have at least one card capable of running the benchmark.  

This does make me wonder how badly a GT 630 performs on Sky Diver. The PassMark video score for a GTX 460 is around 2,600, and the GT 630 - a later generation but really tiny - comes in below 900.

Saturday, September 14, 2019

Good eve-ning

In my last post, I stated that I was all done installing Chrome OS on ordinary PC laptops.  For the next month, at least, I have changed my mind. The same evening I wrote that post, I gave one last visit to my Chrome installations and accidentally discovered a fix to a sign-in problem I had with Eve on my devices.  Almost all of the devices on which I can sandbox Chrome OS lack a TPM 2.0 module, which I believed was the reason I could never complete sign-in on a fresh install.

The discovery happened while upgrading an installation of Glimmer.  Glimmer successfully updated from 68 to 76. By grabbing the PARTUUID, and with a little magic with sed, verity booted with the vmlinuz extracted from /dev/sda4.  Skip over the next point; I included it only to jog my own memory.


  • The actual image resided on an SD card on which I had installed Glimmer as a way to squeeze some extra life out of an ASUS Chromebook with a crippled eMMC.  Although updates would attempt to install, the system reported that no updates were available, solely because the internal disk drive on the ThinkPad Yoga 11e (20de) didn't have /dev/sda4 and higher.  Apparently a check is made before downloading, and part of that check validates there's a partition to accept the update.


After the update, I had a new message stating that this was the last release of Chrome for my device.  It also meant that no further updates were allowed. That triggered my interest in following in the footsteps of Project Croissant to see if Eve would work as far back as a BayTrail CPU.

With this current update of Glimmer, I had no idea where the system recorded the flag that no further updates were allowed.  Even advertising it as an Eve, no new updates were available.  I rolled back to 68, which invalidated my stateful partition - not exactly unexpected - but after getting to a console, I redeployed Eve to a resized /dev/sda5.  The redeploy involved using /dev/zram0 to hold the expanded contents of the Eve .zip and losetup -P to expose the partitions, then a dd of /dev/loop1p3 to /dev/sda5, capturing /dev/loop1p2's and /dev/loop1p4's vmlinuz and kernel configs, and finally the /dev/loop1p12 EFI partition contents.
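Roughly, the redeploy went like this (a sketch -- the image file name is a placeholder, the zram size is a guess, and the loop device numbering may differ):

echo 4G > /sys/block/zram0/disksize
mkfs.ext4 /dev/zram0 && mount /dev/zram0 /mnt
unzip chromeos_eve_recovery.zip -d /mnt
losetup -P /dev/loop1 /mnt/chromeos_eve_recovery.bin
dd if=/dev/loop1p3 of=/dev/sda5 bs=4M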

Eve booted with vmlinuz.A from its EFI partition, but didn't with the /dev/loop1p2-derived vmlinuz.  I realized that Eve has two different kernels in the dl-edge recovery image.  The /dev/loop1p2-derived image was larger than vmlinuz.A, and /dev/loop1p2 also has cros_recovery in its kernel config.  This suggested I ought to use the kernel derived from /dev/loop1p4, and, sure enough, the /dev/loop1p4-derived kernel worked with verity.

I expected wireless, bluetooth, and sound to be absent.  eve-release/R76-12239.92.0 doesn't come with firmware for the Intel 7260-AC, and it doesn't load support for the sound chipset in my device, but modprobe snd-hda-intel solves that.  I learned that with verity turned on, I must pass lsm.module_locking=0 at boot.  I then used the same technique I previously wrote up for my Dell Latitude 5175 to get wireless working on the ThinkPad Yoga 11e.  Surprisingly, I was not presented with a fresh Chromebook setup dialog; however, the installation continued to report that it was the last update for my system.

This was good news and bad news. The good news is that I was able to get Eve working with my BayTrail. The bad news is that the stateful partition would have to be powerwashed, and I would almost certainly be unable to complete initial-setup sign-in.

The normal symptom of a newer ChromeOS on a BayTrail/Haswell has been a BIOS boot loop, a screen that fades to white, or an initial-setup sign-in that never completes.

To rid myself of the update flag, I would have to powerwash.  After powerwashing, instead of the Chrome splash, I experienced a boot loop into the all-too-familiar recovery dialog.  Instead of counting up to 5 minutes, it rebooted rapidly, barely giving me time to see the black & white recovery screen.

I booted a live CD and formatted the stateful partition of the internal disk to ext4 with a volume name of STATE.  I rebooted back to Eve, and sign-in failed, as expected.  Sign-in would get just beyond MFA, and then spin for about 5 minutes before reporting that sign-in failed.
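For reference, the format I used for STATE amounts to a one-liner (assuming stateful is partition #1, as on a stock Chrome OS disk):

mkfs.ext4 -L STATE /dev/sda1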

I repeated the sign-in process and the system became unresponsive. I then used CTRL-ALT-F2 and signed in to the text console as root.  There was a delay of several seconds before the text console became active. I ran top and observed that flashrom was using a fair amount of CPU.  I killed off flashrom, and a new flashrom process spawned in its place.  I killed it again, and top finally indicated the CPU was nearly idle.  I returned to the Chrome interface and was now prompted with settings for my account.

This can mean any number of things, but here are a few possibilities:


  1. Eve now ignores the TPM as a requirement, and retrying sign-in will complete first-time setup of a Chromebook.
  2. Eve tries to flash updates before permitting setup (sign-in) of a new Eve Chromebook, and if that process times out, so does signing in.  Killing off the flash updates might be accelerating the sign-in process.
  3. Dumb luck and timing. EDIT: 09-22-2019. I powered up the laptop and couldn't get signed in.  I thought this had happened only on 09-12-2019, but it happened again.  Although I had previously formatted STATE ext4, I found hints of legacy Glimmer configuration under /mnt/stateful_partition/encrypted.  Eve somehow snagged an update_engine config directive file and what looks like cache content; removing them resolved the no-more-updates-available issue, but after a restart, I could not get signed in.  Perhaps a previous accidental partial boot to Glimmer corrupted STATE, or perhaps the purge of the Glimmer data did me in.  I repeated the wipe of STATE, and just like the issue on 09-12, I was unable to get signed in.  For what it is worth, I proved that even when Eve is allowing me to sign in, I can't get signed in while offline.  If I close the lid on an active session, I can resume; if I power off, I have to be online to get logged in next time.  Therefore, removing tpm_managerd isn't helping.  Neither was killing flashrom on setup, other than that flashrom did appear to be a long-running process during initial Eve setup.  Takeaway: it looks like that in addition to putting a layer over /lib/firmware, I'll have to put a layer over the place where swtpm should be deployed.  To keep with my desire to use only code from trusted sources, I will need to automate building my own swtpm.  I see Croissant distributes the source code in the same spot as the .tar archive, yet the paranoid side of me thinks that binaries which process and contain sensitive user credentials are a good target for bad actors.  Building this myself is not the result I would like to have with Eve; however, I feel it is better than building my own ROOT-A, -B, or -C, with or without others' projects, and then having to learn how to lock it back up with verity.  Or worse, disabling verity.

Here's a strange behavior on my ThinkPad Yoga 11e running Eve:

  • Pressing SPACE-N-M produces a beep. EDIT: 09-16-2019: this actually appears to be a key-jamming indicator.  B-N-M also does it, but J-K-L does not.  H-J-K-L also beeps.

Friday, September 13, 2019

Polishing some Chrome

This is a short recap of my lack of progress over the last few months in making stock Chrome a solid platform for compatible hardware. I halted work on it when I found myself unable to capture the kernel that most Chrome builds deploy to the KERN-A or KERN-B partition after successfully updating the ROOT-A or ROOT-B partition.

By accident, I stumbled onto the very tool I needed in a bit of code for SuperSU.  That tool is futility.  The command-line syntax of futility is self-explanatory.  After ChromeOS reports that you are up to date, use futility to extract the vmlinuz. The example below requires that cros_debug is passed to the kernel at boot and that a shell (CTRL-ALT-T) grants you root. UEFI boot is assumed in this example.  The EFI partition is normally partition #12 (/dev/sda12), and for my example I have mounted it under /tmp/sda12.

futility vbutil_kernel --get-vmlinuz /dev/sda4 --vmlinuz-out /tmp/sda12/syslinux/vmlinuz.B

The futility binary can also yank out the kernel's command line, albeit with unresolved variables.  With a little comparison to existing entries in the grub configuration, it seems pretty straightforward that by fetching the PARTUUID for the newly updated partition and combining it with the remainder of the verity syntax, verity can be enabled.
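Here is the shape of it as I understand it (a sketch: dump_kernel_config leaves the %U placeholder unresolved, and the grub.cfg path assumes the EFI layout from above):

futility dump_kernel_config /dev/sda4 > /tmp/cmdline.B
KUUID=$(blkid -s PARTUUID -o value /dev/sda4)
sed -i "s/%U/${KUUID}/g" /tmp/cmdline.B
# paste the result into the matching menuentry in /tmp/sda12/efi/boot/grub.cfg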

With the small bit of ground gained moments before writing this up, I cut a corner by specifying root=/dev/sda5, but still cheered loudly when Chrome booted and reflected the upgrade from 68 to 76.

Despite the small victory, and how easy it would seem to automate updates in fewer than 10 commands, it is unlikely I will contribute any future updates on this topic to this blog.

While searching for answers to some of my own Chrome-on-PC-laptop questions, I discovered that alternatives are bountiful. With several alternative sources for a Chrome hybrid installation available - FydeOS, AtB, Chromefy - any other write-up should be offered only if it has something the others lack and something an audience connects with. The alternative offerings keep improving and simplifying the update process, which in essence achieves what I finally scored today with futility, even if they do not share my desire to preserve factory image integrity.

Furthermore, now that TPMs have become a standard part of Chromebooks, I suspect my fascination with making an uber Chromebook from an i3-or-higher Windows or Linux laptop will fade, simply due to the challenge and implications of neutering kernel-based TPM security.

As a reminder, my objective was to provide a deployment method using only a stock Chrome image, with Play Store support where factory-offered, resulting in a system that successfully applies updates.  Now that this is complete, I am moving on.




Sunday, January 20, 2019

Epple, let's see!

If I happened to be in a cafe and my ears latched on to any muttering of "Disk II," I suspect I would be unable to resist eavesdropping and attempting to penetrate the conversation.  I exhausted a tremendous portion of my childhood, from 1986 until 1991, ensuring I could make a backup of every Apple II program I encountered -- even the crappy ones like Shamus and Frogger.  I lacked any electrical-theory foundation, lacked oscilloscopes or meters, and could barely program my own rudimentary Disk II machine code; there was no way I could create a 6&2 decoder, but fetching raw data wasn't impossible.

In a pursuit to reclaim my own fatiguing memory of the Disk II system and the behavior of said games, I purchased the (absolutely fantastic) AppleSauce floppy drive controller.


  • I refrain from using the designer's full name or storefront to avoid appearing on the first page of Internet search results for would-be seekers of AppleSauce or other Disk II-ish topics.  As for AppleSauce, this isn't said enough: John, thanks and thanks again.


The AppleSauce controller has a continuously-developed software counterpart that smartly rips a flux image of the floppy media, and through its Disk Editor, permits exporting to the AppleSauce-driven WOZ1 and WOZ2 formats.  

Emulators are adopting mature WOZ support at an impressive rate.  The AppleSauce storefront details Apple II emulators with WOZ support, and like AppleSauce, many emulators continuously change.  Some need to be built from source.

I found one emulator which does not need to be built, but does require OS X -- Gerard's fantastic work -- and sadly, for me, it does not work with the WOZ of Frogger captured by fouram.  I had a master Frogger disk in my former library, which worked on my enhanced //e back then, so I get the impression it should work, but the booted Frogger WOZ1 never gets far beyond the Spiradisc banner before creating drive chatter.  If my memory serves me correctly, it should boot to a window-blind-filled game screen.

I'm no stranger to building from source on Linux, but rarely have I had to assemble all the aclocal, automake, and autoconf components.

This post is an anchor to help preserve the steps (the right steps perhaps?) which resulted in a working Epple2 build.

  • Download and unzip xa65
  • Edit the Makefile of xa65 to set DESTDIR = ~/
  • Run make
  • Run make install
  • export PATH=${PATH}:~/bin
  • Download and unzip epple2 (great work, Christopher M!)
  • Run aclocal
  • Run touch config.h.in
  • Run automake --add-missing
  • Edit configure, changing ac_default_prefix=/usr/local to ac_default_prefix=~/
  • Run ./configure
  • Run make
  • Run make install
  • Download and unzip Apple-II-Source (this produces applesoft.a65 and monitor.a65)
  • Run aclocal
  • Run mkdir build-aux
  • Run automake --add-missing
  • Run autoconf
  • Run automake --add-missing
  • Edit configure, changing ac_default_prefix=/usr/local to ac_default_prefix=~/
  • Run ./configure
  • Run make
  • Run make install
After these steps, epple2 booted once I kicked off emulation with F1.  Prior to adding applesoft.a65 and monitor.a65, I would get a low-res screen of colors, which I thought meant something.

I hope this write-up helps save someone time.

Friday, January 11, 2019

Hey HughesNet, how about giving me a real boost?

A year ago, Family signs up for HughesNet.  Family decides to buy the HughesNet booster - which in concept is a sound idea - especially since Family decided to anchor the HughesNet modem/router at one far end of the house.  But Family has an old house, outlets are few in number, and those in the ideal Wifi coverage zone are encumbered by being part of either the Dining Room or the Kitchen (it's a long, narrow kitchen layout - annoying for the cook, for sure).  So that's pretty much a "no" to slapping a wall wart in a public or volatile space, an absolute avoidance of the "mancave" loose snaking-across-floor or taped-to-ceiling cabling approach, and an additional "no" to the use of EoP/powerline adapters as fixture infrastructure.

I give full points to HughesNet for simplifying the installation of the booster, but I - I - yeah - I cannot give any points to an overpriced ($99) booster which fails to implement PoE and fails to use a standard 12V "router" plug.

However, I have good news: inexpensive PoE splitters with yellow plastic tips can be ever so slightly altered to accommodate the slightly larger center-conductor tip of the HughesNet booster.  With a hobby knife, gently widen the inner circumference until it fits snugly into the booster.

The model number of the splitter with which I first tried and had success was a U5160+U6023.

With this alteration, a single cable can be run cleanly through the attic, the booster gets a clean fixture mounting, and there is no wall wart.

Times of Lorex

Once upon a time, I played a wonderfully designed game called Times of Lore.  While rummaging through some of my old disks late last year (2018), I found the backups I'd made of the game in 1991.  The disks were still integral, which pleased me, since it had been before the turn of the century that I last toyed with them, and easily 9 years before that that I undermined the copy-protection scheme.  At that time I was in high school, and I almost never recorded my procedures, but for this game I distinctly remember that the Apple II DuoDisk drives were very noisy compared to the older shoebox-shaped (Shugart) drives.  That noisiness always seemed to lend me a clue.  When booting the store-bought master of Times of Lore, I could hear track-seek noises similar to ProDOS 1.1.1, and after much trial and error, I found I could mostly close the spring-loaded drive latch - but not all the way - and at a precise moment, rip out the disk and swap in a ProDOS disk with Copy II Plus.  If my memory serves me correctly, this gave me the chance to lift the data and write it back absent any raw copy protection.

That bit of history is merely a limp segue into a matter that has been consuming much of my time as of late: a Lorex video surveillance system.

A family member purchased a Lorex system and then expected some relative would set it up.  Until the lucky day that I successfully cross-train into a different field, I will invariably navigate the agonies of family knowing I'm classically trained in computing technology.

Family was told that setting up a Lorex was easy.  Family was told wrong.  A few takeaways for anyone planning to install a Lorex:

  • The Lorex 4K cameras have an excellent consumer-grade picture.
  • The Lorex NVR PoE ports are totally adequate, but the way Lorex coils up the 90-foot cables renders the cables quite uncooperative.  In cooler weather, I have found it is easier to make fresh runs from a CAT5e spool and terminate both ends.
  • The Lorex NVR requires a Chrome plug-in that cannot be resolved.  The next bullet point has the workaround.
  • Direct access to the Lorex 4K cameras successfully directs Chrome to a NaCl Web Plugin that can manage the NVR.
  • A building that's pre-wired, or at least has existing conduit for cabling, drastically reduces deployment time.
  • The individual cameras have username/password logins that should be changed, but out of the box they do not require a password change.  If deploying a discrete PoE network, change the passwords of the cameras.
  • The NVR has a generous list of other cameras which can be added - Sony and JVC, to name a few.  Sadly, ABS MegaCams do not make the list.
  • The cameras will work via PoE with Ubiquiti ToughSwitches at the 48V setting, and with Netgear GS-108/GS-110TP switches.
  • The Lorex utilizes a Western Digital Purple surveillance drive.  I considered buying an SSD to replace it, since the environment will be harsh, but the Purple drives appear to be well suited for this purpose.
The time commitment is far more than I anticipated, with cabling and infrastructure being the most laborious and having the steepest learning curve.

Unresolved issues include:
  • Ensure that the request to not cloud sync the footage be honored.  Block with home router ACL sandwich? 
  • Standardizing footage/NVR administration.  Chrome? Android? Windows?
  • Securing the NVR.  Will a lock box with an aft fan suffice?
  • Securing the out of band camera network yet providing a means to relay SMTP alerts directly through Lorex's service.
  • PoE switches ideally should be placed in a convenient location, but this building is challenged by its 1960s construction: it lacks grounded outlets and has no power in the attic.  Furthermore, the attic is accessible only from the stairwell to the basement.  It seems to make the most sense to locate the attic-mounted PoE switches close to the attic entrance.
  • Many cables will have to be run.