Category Archives: Technodiscovery

Learning how to do things.

The Value People Act

Proposal: Alteration to the federal (and state) tax code to allow valuation of individual headcount as an intangible asset.

Purpose: Establish a basic value (e.g. $10,000 USD) per full-time or part-time employee to be recorded as an intangible asset on an organization’s balance sheet.

Conditions: Does not differentiate employee headcount by any Title IX categories or by payroll expense in calculating base employee value (i.e. a CEO is equal to a mailroom clerk for the purposes of base valuation).

Benefit and Risk Analysis:

Does not affect Income Statement (tax revenue) or alter normal payroll-related expenses.

Adds a “book value” for retaining and re-deploying employees for growth and/or surge capacity purposes.

The $10,000 USD valuation represents a basic worth of an individual, based upon 50 percent of a federal minimum wage, less indirect and overhead expenses. It is only a general approximation and is designed to level out over the payroll headcount population.

Easily auditable, as asset valuations (per balance sheet) are routinely audited by financial lenders, the IRS, the SEC, et al., and are reconcilable against required payroll records (1 SSN/TIN = 1 headcount).

We treat office furniture, computers, and capitalized assets as valuable because they are a measurement of an organization’s overall stability.

Employers who retain headcount, even if relatively idle or unapplied, are recognized financially for higher stability or capacity.

Employee headcount should be an equal measure of a firm's size, capacity, and potential stability, similar to how retaining capital assets and other forms of personal property increases an organization's "book" value.

Encourages headcount retention and discourages mass layoffs, in favor of longer-term strategic investment in assets.

Discourages mass outsourcing of production labor to non-payroll entities by encouraging workforce stability.

Effective workplace-presence conditions can be built in to encourage on-site workplace improvement: a person is counted as an asset only if the primary work is performed at an organization's established work location. Work-from-home, remote, or field positions are not counted as intangible assets under this recategorization – only positions given a permanent work location can be counted.

Organizations seeking to abuse this policy are effectively self-policed. Inflated headcounts for over-valuation purposes are auditable, and ineffective at influencing Income Statement results (these are non-depreciated intangible assets.)

See-saw hiring and firing is discouraged, as temporarily inflating asset valuation has an effect similar to short-term seasonal location leasing: it inflates short-term expenses and is adjusted out during balance sheet analysis for purposes of capital valuation.

Existing Generally Accepted Accounting Principles (GAAP) need little to no modification to adopt this revision. The execution is the addition of an Intangible Asset category based upon payroll headcount multiplied by a fixed value (the suggested $10,000 USD value above).
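
As a worked example of that execution: an organization carrying 250 employees on payroll would book 250 × $10,000 = $2,500,000 as a non-depreciating intangible asset line item on its balance sheet; when headcount changes, the line item is simply re-stated at the next reporting date.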

An organization that attempts to abuse this intangible asset category – e.g. hiring 90 percent executive-level employees and 10 percent production staff – books the same asset value as one with 10 percent executive staff and 90 percent production employees, but the effective impact is the high expense-to-income ratio of the former versus the latter. (Asset turnover ratios account for this attempt to circumvent the intent of this revision.)


Oracle EBS R12.2 Fix log4j vulnerability in AD/TXK.Delta.12/13

CVE-2021-44228 Advisory for Oracle E-Business Suite (Apache log4j Vulnerabilities) (Doc ID 2827804.1)

Applicability: Those who have upgraded their 12.2 AD/TXK components to Delta.12 or Delta.13, generally in preparation for compatibility with 19c database upgrades, or who have a continuous patching policy promoting that component upgrade.

Prior AD/TXK releases did not employ the JNDI-supporting log4j code.

The existing work-around fix, which will later be packaged into the next AD/TXK release with a newer version of the log4j library that does not have the vulnerability, is quite simple: delete the vulnerable JndiLookup.class from the log4j_core.jar archive in which it was deployed.

This jar exists in two places: under $COMMON_TOP for runtime use, and under $FND_TOP as the patch-staged version that gets copied to $COMMON_TOP.

Please remember you need to fix both your Run and Patch filesystems, so run the fix once for each.

This is a scripted re-packaging of the steps outlined in the above MOS Doc ID 2827804.1 – modify to suit your particular installation and platform:

#!/bin/ksh
# Fix log4j vulnerability in AD/TXK.Delta.12/13
# Scripted from MOS Doc ID 2827804.1 -- removes JndiLookup.class from both
# copies of log4j_core.jar. Run once on the Run filesystem and once on the
# Patch filesystem (after sourcing the corresponding environment file).

echo "\n Fix log4j vulnerability in AD/TXK.Delta.12/13 \n"
echo "CVE-2021-44228 Advisory for Oracle E-Business Suite (Apache log4j Vulnerabilities) (Doc ID 2827804.1) \n"
export jars="$FND_TOP/java/3rdparty/stdalone/log4j_core.jar $COMMON_TOP/java/lib/log4j_core.jar"
echo "\nCurrent copies of log4j_core.jar:\n"
for jar in $jars ;do ls -l $jar ;done
echo "\nBackup the existing log4j_core.jar in FND_TOP\n"
# Keep the original as .bak, then work on a fresh copy
mv $FND_TOP/java/3rdparty/stdalone/log4j_core.jar $FND_TOP/java/3rdparty/stdalone/log4j_core.jar.bak
cp $FND_TOP/java/3rdparty/stdalone/log4j_core.jar.bak $FND_TOP/java/3rdparty/stdalone/log4j_core.jar
echo "\nDeleting JndiLookup.class from Jar archives\n"
for jar in $jars ;do zip -d $jar org/apache/logging/log4j/core/lookup/JndiLookup.class ;done
echo "\nVerify that size is smaller and dates are newer\n"
for jar in $jars ;do ls -l $jar ;done
echo "\nVerify that JndiLookup.class is no longer found in jars (0 files):\n"
for jar in $jars ;do unzip -l -q $jar org/apache/logging/log4j/core/lookup/JndiLookup.class ;done
echo "\nNow bounce the MT services - adstpall.sh / adstrtal.sh\n"
cd $ADMIN_SCRIPTS_HOME

For those with WebLogic-based apps (Primavera, SOA Suite, etc.), this is the applicable MOS Doc:
Security Alert CVE-2021-44228 / CVE-2021-45046 Patch Availability Document for Oracle Fusion Middleware (Doc ID 2827793.1)

Evaluation of Log4j Use

  • The system classpath (CLASSPATH) is displayed during WebLogic Server startup by the startWebLogic script. It is also viewable in the DOMAIN_HOME/servers/[servername]/logs/[servername].out file.
  • Review the following to determine the impact and considerations for all Oracle products, which may be using these or different Log4j jar files:

    Doc ID 2827611.1 Apache Log4j Security Alert CVE-2021-44228 Products and Versions

WebLogic Server Installed Log4j Files

Apache Log4j version 2 is not used in default Oracle WebLogic Server installations or configurations. However, the Oracle WebLogic Server home does contain Log4j jars (version 2 in the later releases; note that the 12.2.1.3.0 jar listed below is Log4j version 1).

The Log4j jar files, located in the ORACLE_HOME/oracle_common/modules/thirdparty directory, for each version are:

12.2.1.3.0: log4j-1.2.17.jar
12.2.1.4.0: log4j-2.11.1.jar
14.1.1.0.0: log4j-core-2.11.1.jar and log4j-api-2.11.0.jar
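
A quick way to confirm what is actually shipped in a given home is to search it directly (a simple sketch; set ORACLE_HOME to your installation):

# List any Log4j jars present under the WebLogic/FMW home
find $ORACLE_HOME -name "log4j*.jar" -exec ls -l {} \;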

Patch Availability for Oracle WebLogic Server and Oracle Fusion Middleware 

The patching requirements for addressing CVE-2021-44228 and CVE-2021-45046 are listed below, with patch links for all versions under error correction support.

Each overlay patch has a prerequisite of the corresponding WebLogic Server PSU for Oct 2021:

WLS Release – Required Patches (apply the WLS PSU and then the CVE Overlay):

14.1.1.0.0: WLS PATCH SET UPDATE 14.1.1.0.210930 (Patch 33416881)
    + WLS OVERLAY PATCH FOR 14.1.1.0.0 OCT 2021 PSU (Patch 33671996) for CVE-2021-44228, CVE-2021-45046

12.2.1.4.0: WLS PATCH SET UPDATE 12.2.1.4.210930 (Patch 33416868)
    + WLS OVERLAY PATCH FOR 12.2.1.4.0 OCT 2021 PSU (Patch 33671996) for CVE-2021-44228, CVE-2021-45046

12.2.1.3.0: WLS PATCH SET UPDATE 12.2.1.3.210929 (Patch 33412599)
    + WLS OVERLAY PATCH FOR 12.2.1.3.0 OCT 2021 PSU (Patch 33671996) for CVE-2021-44228, CVE-2021-45046
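
The mechanics follow the standard OPatch flow; a minimal sketch (the staging paths here are assumptions, and each patch README takes precedence):

# Shut down all WebLogic servers in the domain first
export ORACLE_HOME=/u01/oracle/middleware    # assumed install location
cd /tmp/staged/33416868                      # unzipped PSU directory
$ORACLE_HOME/OPatch/opatch apply
cd /tmp/staged/33671996                      # unzipped CVE overlay directory
$ORACLE_HOME/OPatch/opatch apply
$ORACLE_HOME/OPatch/opatch lspatches         # verify both patches applied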

Google forms and regular expressions for response validation

Given the proliferation of teachers now using Google Classroom to conduct classes, I was somewhat shocked that the documentation for the Quiz sections of the Classwork assignments is quite insufficient (or presumes you're an IT geek like me who can just figure out what programming is available to you).

The example situation is given by this blog entry related to Google Classroom, with students' answers being marked Incorrect because, on short-answer responses, every answer is matched as a "literal" string – that is, upper- and lowercase letters MATTER (a lot!).

Link to:
Student’s answers were marked wrongly in a short answer quiz by Google Forms.

https://support.google.com/edu/classroom/thread/39155344

The odd thing is that while Google provided a solution for simple e-mail address validation and various numerical responses, it's been horrible at dealing with text answers.

The answer is in the third category of Response Validation: regular expressions. RegExes are commonly used in programming languages and OS shells (like Linux, Unix, HP-UX, etc.), since when scripting various commands we often need to parse parameters and handle input like file directory listings, or long lists of items separated by some arbitrary character (like a comma or a vertical bar).

Hence my example here: a student who was marked with Incorrect answers simply because they didn't provide the exact case required by the three answer versions entered by the instructor (e.g. "Any Dog", "any dog", "ANY DOG") – the student typed "Any dog" and got it marked Incorrect.

One typical way to prevent this is specifying in the quiz preamble the exact format you want for short answers. For example: "Please enter all short answers in lowercase letters only, with no leading or trailing space or tab characters."

But a more practical way is exercising that Regular Expression engine that’s built into Google Forms.

My example question wants a response from the student like "inner core" (providing a graphic of the planet's layers and just labeling them A/B/C/D/E would have been simpler, but maybe I'm testing vocabulary at this point).

Selecting the Response Validation type "Regular expression" with the condition "Doesn't contain" and the pattern "[A-Z]" means: if the short-answer text contains any uppercase letter from A to Z, display the warning text "Please use all lowercase answers only!" and do not accept the answer as submitted. (Note that "Matches" with the pattern "^[A-Z]" would do nearly the opposite: the "^" anchor means "begins with an uppercase letter", and "Matches" requires the rule to be satisfied.)
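
You can sanity-check a pattern from any shell before pasting it into the form – grep -E uses a very similar syntax (a quick sketch):

# "[A-Z]" flags any answer containing an uppercase letter
echo "Any dog" | grep -E "[A-Z]"      # prints the line: would trigger the warning
echo "any dog" | grep -E "[A-Z]"      # no output: passes the all-lowercase rule
# "^inner core$" accepts exactly one spelling, case-sensitively
echo "inner core" | grep -E "^inner core$"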

Regular expressions can get really complicated, but think of them as basically describing what's in a string of text, with the match coming out either TRUE or FALSE. Keep your answer expectations limited (unless you happen to be teaching a course in OS-level scripting, in which case, go ahead and get as complicated as you'd like), and I think you'll find your students will be gently guided into providing answers in the form you were thinking of when you prepared the quiz.

And isn’t that what this was all about in the first place?

Here’s a link to a more thorough (and lengthy, and complicated) discussion of the power of Google Forms using Regular Expressions:

Okay, an Autonomous Database is one that Wanders Off by Itself.

An odd Windows User Access Control error message.
Used without express permission from windowsinstructed.com

In a 2014 VoucherCloud.net (a coupon website) survey of the non-technical U.S. general public:

  • 11% believed HTML was a sexually-transmitted disease
  • 51% believed a stormy weather condition would affect their access to the Cloud
  • 27% thought a gigabyte was a common insect in South America
  • 18% thought Blu-ray was a marine animal
  • 23% thought an MP3 was a Star Wars robot
  • 12% thought USB is the acronym for a European country
  • 42% said they believed a motherboard was “the deck of a cruise ship”
  • 77% could not identify what SEO means
  • 15% said software was comfortable clothing

However, 61% of the 2,392 respondents (18 and older) thought it was important to have a good knowledge of technology.

That explains why ZDNet's Oracle's Next Chapter: The Autonomous Database and the DBA (https://www.zdnet.com/article/oracles-next-chapter-the-autonomous-database-and-the-dba/) takes a bit of chewing to understand: when a vendor says "you'll save $250K by moving to the Cloud," that's akin to someone saying, "You'll save $40,000 by not buying a car, but renting it at $40/day from Hertz/Avis/Thrifty/Dollar."

Well, yes, but not quite. Rental agencies have those add-ons, such as a Collision Damage Waiver, which can be thought of as the same as that 15% mandatory support fee on top of the core/sizing/machine-size fee, providing coverage in case something goes awry (or buggy, or BSODs for no known reason).

And simply, if you stop paying, you don’t have a car/database/middleware/website.  If you do decide to opt for the rent-to-own option, just like the ubiquitous furniture rentals used by many seasoned relocation workers, doing so does cost much more than buying the furniture outright (but you don’t have to move it, and you get to turn it in, and trade-up or down when you wish, subject to the terms of your rental agreement.)

[Terms are important, as you will notice in the new Cloud On-Premise agreement, it does have a 4-year minimum term – similar to a limited term car lease.  And it comes with an early termination cost. And similarly it has “limited mileage” conditions, which if you go over your CPU/sizing/feature limits, you’ll simply be billed extra for that.  Convenience has costs.]

An autonomous database at this stage is similar to self-driving cars: given super-precise limitations, in a controlled environment, with well-defined conditions, yes, the Optimizer stays within the lanes and keeps the database engine humming along. When the odd situation is encountered, it's back to the driver/DBA to figure out what to do and what went wrong.

The LA Times article:
http://www.latimes.com/business/technology/la-fi-tn-1-10-americans-html-std-study-finds-20140304-story.html

The full VoucherCloud.net survey results:
https://drive.google.com/file/d/0B9HJeR-F9NIeczNDb2hVb2p6UTQ/edit

In closing, in case you missed it, Japan created a banana with edible peels: https://news.nationalgeographic.com/2018/01/edible-peel-bananas-created-japan-food-spd/

Top 7 Reasons Organizations Should Not Automatically Switch to Hosted Enterprise Technology

Cloud with No Symbol
Not Cloud?

A college education can make you think differently. As I read the original article, I recalled the many times my statistics professors pointed out that anyone can basically lie with numbers to make them support whichever position they want. This was equally true in a class I took on Mass Persuasion and Propaganda.

Thus I present the same article, with the concluded statistical results of the IDG survey inverted, and with minor modifications to the explanations to suit the inverted measures. Respect is given to the original author, Tori Ballantine, a product marketing lead at Hyland Cloud. No offense is intended by this grammatical exercise in statistical results inversion.

Original Article:

Top 7 Reasons Manufacturers Should Host Enterprise Technology
https://www.mbtmag.com/article/2018/07/top-7-reasons-manufacturers-should-host-enterprise-technology

Top 7 Reasons Organizations Should Not Automatically Switch to Hosted or Cloud Enterprise Technology

As one of the leading industries that was an early adopter of process automation, manufacturing is often ahead of the curve when it comes to seeking ways to improve processes — yet still has work to do in the technology adoption realm. While the trend for cloud adoption is increasing over on-premises solutions overall, some organizations, including manufacturers, are hesitant to make the transition to the cloud.

There are countless compelling reasons to transition to hosted enterprise applications. According to a recent survey from IDG, IT leaders at companies with 250+ employees, from a wide range of industries and company sizes, agreed on seven areas where cloud computing should benefit their organizations. These included:

Disaster Recovery

Disasters, both natural and man-made, are inherently unpredictable. When the worst-case scenario happens, organizations need improved disaster recovery capabilities in place – including the economic resources to replicate content in multiple locations. According to the IDG survey, about 33 percent of IT leaders did not rank disaster recovery as the number one reason they would move, or have moved, to hosted enterprise solutions. By switching to a hosted solution, about one-third of organizations could not get their crucial applications running as soon as possible after an emergent situation, and are therefore unable to serve their customers.

Data Availability

IT leaders know that data and content are essential components of their daily business operations. In fact, according to the IDG research, 45 percent of survey participants listed data availability as the second leading limitation that cloud enterprise applications were unable to address. Access to mission-critical information, when they need it, wherever they are, is essential for organizations to stay competitive and provide uninterrupted service. With no noticeable increase in uptime compared to on-premises applications, hosted solutions did not provide 24/7/365 data availability.

Cost Savings

It shouldn’t come as a surprise that the third most popular reason IT leaders seek cloud solutions is cost savings. Hosting in the cloud eliminates the need for upfront investment in hardware and the expense of maintaining and updating hosting infrastructure, by shifting the cost basis to long-term operational costs. Hosting software solutions on-premises carries more than just risk; it carries a fair amount of operational cost. By hosting enterprise solutions in the cloud, organizations will reduce capital costs with a possible reduction in operating costs – including staffing, overtime, maintenance, and physical security – when centralized under a hosting provider.

Incident Response

The IDG survey found that 55 percent of IT professionals listed incident response as another area where cloud solutions provided no significant benefit over on-premises options. Large-scale systems can develop more efficient incident response capabilities, and improve incident response times compared to smaller, non-consolidated systems. As seconds tick by, compliance fines can increase along with end-user dissatisfaction. So having a quick incident response time is essential to reduce risk and ensure end-user satisfaction.

Security Expertise

The best providers of hosted solutions constantly evaluate and evolve their practices to protect customers’ data. This is crucial because up to 59 percent of IDG survey responders noted security expertise as another leading reason they do not select cloud applications. Organizations with cloud-hosted applications could take advantage of the aggregated security expertise of their vendors to improve their own operations and make sure information is safe, but only by complying with externally-driven security standards that were sometimes not enforceable due to application restrictions (legacy versioning, design constraints, third-party non-compliant architecture, et al.). To ensure your content stays safe, it’s important to seek cloud providers with the right credentials – look for certifications such as SOC 1, 2, or 3 audits, ISO 27001, and CSA STAR Registrant.

Geographical Disbursement

The IDG survey found that over 63 percent of IT professionals were not seeking geographical disbursement of where their data is stored. In the event of data unavailability in a local data center, having a copy of the data in a separate geographical area ensures performance and availability of the data sources, though the resources to use the data may not be readily available, as they are co-located in the local region of the primary data.

Expert Access

IT professionals seek hosted solutions because the best hosted software applications employ top-notch security professionals. Gaining access to these professionals’ insight helps ensure concerns are addressed and the software delivers on the organization’s needs.

In order to facilitate the best possible experience for your customers, it’s important to keep up with technology trends that give you the data and insights you need to provide quality service. For many firms, that means focusing not only on process automation on the manufacturing floor, but also on the internal processes driven by data. There’s a huge shift happening in how organizations choose to deploy software. In fact, according to a recent AIIM study, almost 25% of respondents from all industries are not seeking to deploy cloud software in any fashion. 60 percent of those surveyed plan to focus on a non-hybrid approach, primarily leveraging on-premises deployments, while 38 percent said they will deploy cloud solutions.

As noted in the seven areas above, the reasons for the lack of a shift toward hosted enterprise applications are diverse and compelling. The cloud provides users with greater access to their information, when and where they need to access it – and doesn’t confine users to an on-premises data source. When weighed against the other claimed benefits of improved business continuity, cost savings, incident response, security expertise, and expert access, organizations should carefully consider whether their important information and content really is more available and secure in the cloud.

Nespresso Vertuoline Coffee Capsule Brew Formulas

I like Nespresso’s Vertuoline single-brew appliance – not because it’s convenient (which it is), nor because it’s quick (ditto), but because, thanks to inventive others like My-Cap.com, the otherwise rather wasteful aluminum capsules can be re-used indefinitely (as long as you’re careful not to pierce or dent them too much through handling).

My-Cap.com foil kit for Nespresso Vertuoline

My-Cap.com makes foils, replacement capsules, little plastic quick caps, and all sorts of accessories for single-use pod/capsule coffee makers, which definitely extends their environmental friendliness geometrically (otherwise, at one plastic or aluminum pod per cup, they create a ridiculous amount of landfill over time if not properly recycled – Nespresso is one of the only companies that provides free shipping for recycling capsules).

My full user review is up at Amazon.com:

Nespresso Vertuoline Finished Lungo Custom Deathwish

https://www.amazon.com/review/R2CA7L5M1YTU1M/ref=cm_cr_rdp_perm

But this post is a more in-depth look at the Vertuoline capsule brewing formulations coded into the barcodes that surround the bottom of each capsule.

Only recently has Nespresso started revealing the numerous formulas used in each version of the capsules, allowing those of us doing the re-use/re-pack/re-foil thing to properly select a barcode that will work best with the coffee being refilled.

I’ll try to keep the following table updated as new information arrives on this subject:

Capsule         | Type     | Color            | Prewetting | Flow       | Temperature
Altissio        | Espresso | Dk Purple        | Short      | Slow       | High
Diavolitto      | Espresso | Dk Blue          | Long       | Slow       | High
Voltesso        | Espresso | Bright Gold      | Short      | Slow       | Low
Decaf Intenso   | Espresso | Dk Red           | Long       | Fast       | High
Giornio         | Lungo    | Orange           | Long       | Slow       | Low
Solelio         | Lungo    | Yellow           | Short      | Fast       | Low
Intensio        | Lungo    | Dk Brown         | Long       | Med        | High
Stormio         | Lungo    | Dk Green         | Long       | Slow       | High
Odacio          | Lungo    | Med Blue         | Long       | Med        | High
Melozio         | Lungo    | Dk Gold          | Short      | Fast       | High
Elvazio         | Lungo    | Pink             | Short      | Slow->Fast | Low->High
Decaffeinato    | Lungo    | Red              | Short      | Med        | Low
Half Caffeinato | Lungo    | Red/Black        | Short      | Med        | Low
Cafe de Cuba    | Lungo    | White/Red script | Short      | Slow       | Low
Flavored        | Lungo    | Various          | Short      | Med        | Low

My-Cap.com custom Deathwish after brewing

I use a #2 espresso grind for the Lungo-size capsules (taking 10-12g of coffee grounds), and #1 espresso for the Espresso size (which takes 5-8g by comparison). I pack each to within 1mm of the top of the flat rim of the capsule, which allows plenty of expansion room during the pre-wetting stage.

To remove the original foil, just run a hobby knife around the rounded part of the rim, inside the flatter portion that the foil is glued to, leaving a nice flat foil ring for the reusable foils to adhere to during extraction.

A cardboard my-cap storage sleeve

I also made up a few cardboard sleeves (à la Pringles cans) that close-fit the edges of the capsules to better keep the re-use foils on the capsules. The adhesive works well during extraction, but I disliked all the extra clamping and crimping others were doing to try to seal them better. I find just placing a foil on top and running around the edge with a fan-brush handle is fine, then gently folding over the edges. The adhesive is enough to seal against the rim lip during extraction, and as long as you keep the capsules upright, they won’t spill.

The price of convenience is that these little capsules use about 200% more coffee per cup to reach the strength of a standard French-press cup, but they produce a good 2cm of crema in the process (more like a Vev-Vigano stove-top pressure coffee extractor). The Espresso versions are a little lighter, in that the brewing process is modified and thus consumes closer to a “normal” puck’s worth of coffee per espresso. They can also be double-brewed – reset the cycle by turning the lock open (but don’t actually open it), then re-lock to allow a second button press and re-trigger the pre-wetting cycle – if you prefer something like a 1-1/2 espresso.

Wire Bail Canning Jar Gasket Sizes – Fido, Bormioli, Le Parfait, et al.

5.0L Common Wire Bail Canning Jar – Glass

The rubber or silicone ring invariably wears out, cracks, splits or otherwise no longer seals properly.

Interestingly, I noticed that when browsing for replacements, there are a number of different pseudo-standard sizes involved.

Since they are flexible, one size too small may fit anyway, even if the gasket slightly buckles or curls. Thickness also varies enough (especially if too thick) that you may need to adjust the wire band to accommodate it: sometimes the lower band – the one on the jar itself – can be flipped upside down to give an additional 2mm of closure clearance for thicker 4mm gaskets. Most gaskets are of the 2mm variety.

Modern canning jar gaskets 70mm, 80mm, 100mm

Common Canning Wire Bail Jar Gasket Sizes with Inside and Outside Diameter measurements:

Antique 1/4″ width canning jar gaskets

* Antique gaskets were usually only 1/4″ wide instead of the modern 1/2″ width.

Ball wide-mouth versus regular-mouth glass canning jars

For reference, Modern Standard Ball/Mason Screw-on Canning Lid sizes:

  • Standard Mouth Ball: 2-1/2″ on center at lid seal; 2-3/8″ (60mm) ID, 2-5/8″ (67mm) OD at jar opening.
  • Wide Mouth Ball: 3-1/8″ on center at lid seal; 3.0″ (76mm) ID, 3-1/4″ (83mm) OD at jar opening.

There are also any number of decorative model jars which are very tiny in comparison (1″ / 25mm ID). These were not meant for actual canning use, but were often used as salt and pepper shakers or for gifting jam samples. Naturally, getting replacement lids or gaskets for these is pretty much impossible, other than finding something that happens to work at your local hardware store in the pipe and plumbing department.

Miniature 1-7/10″ 42mm wire bail decorative canning jar

2" 51mm mini decorative mason jar
2″ 51mm mini decorative mason jar

Big Data Cloud Storage Framework Solutions

Based upon a recent meeting of minds at MESS (Media Entertainment & Scientific Systems, http://www.meetup.com/MESS-LA), the challenge facing these industries is how to deal with petabyte-sized amassed data that still needs to be accessible in real time, for secured editing purposes, by downstream customers and suppliers.
Here’s a multi-phase solution idea:

My idea for a secure big file delivery model. Figured by J. H. Lui (c) 2016

  • Use torrent technology for access, with authenticated peer-to-peer hosts, private SSL-encrypted trackers/announcers, and encrypted bit streams. This maintains access to the fundamental data source using minimal infrastructure.
  • Add two-factor authentication to the authentication protocol to allow time- and role-based security to be enforced (so-and-so p2p host is authorized to connect to the torrent during X days/N hours per day/etc.).
  • Use generic two-factor authentication providers (e.g. Symantec VIP or SAASPass) to allow small service providers to access data without excessive overhead cost or dedicated hardware solutions.
  • Store the data source files using a torrent+sharding+bit-slicing protocol (similar to the Facebook image storage model). Without authenticated access to the cloud torrent, any individual data chunk or shard grabbed by a sniffer becomes useless.
  • Segregate and divide the data files using a role-based security architecture (e.g. Scene 1 is needed by X post-production editor during N time-period). Individual torrent participants can select the individual virtual file segments they need for work, without downloading the data chunks unrelated to them. Similarly, the time+role based security described above prevents access to/from data segments that are not authorized for that endpoint. Password-protection could even be added to individual sensitive segments to provide one more level of turn-key security.
  • Use a Google Drive/Dropbox-style OS protocol to allow mounting of the torrent sources on end-user workstations with transparent access. Whichever mechanism can provide adequate latency for the block replication should be sufficient. Rather than mounting the same cloud torrent on every local workstation, use local NFS servers to provide local home-basing of the cloud mount (WAN speed), then export that mount locally (LAN speed) to the various workstations that need access to it (see the sketch after this list). That way, there’s only one penetration point to/from the cloud torrent, which can be adequately firewalled locally by the end-user. This is a solution for the end consumers that need access to the largest portion of the cloud data set.
  • The source data hives can use a multi-path networking protocol ( https://jhlui1.wordpress.com/2015/05/21/multi-path-multiplexed-network-protocol-tcpip-over-mmnp-redundant-connections ) to further split and sub-divide the data streams (which are already encrypted), to maximize performance to bandwidth-limited consumer endpoints.
  • Media companies have a rather different data value model to deal with, because during pre-production the data value is extremely high, but it drops off rapidly post-release once the market consumes it. The same model at a lower protection level would work for actual distribution – wherein end subscribers are authenticated for access to a particular resolution or feature set of the original cloud segments (e.g. 8K versus 1K media, or audio-only, or with or without Special Features access).
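
A rough sketch of that local home-basing idea (the paths, hosts, and the cloud-mount client are all assumptions – any FUSE-style authenticated torrent client would occupy the first step):

# On the local NFS server: mount the cloud torrent source once (WAN speed)
# (cloudfs is a hypothetical FUSE client for the authenticated cloud torrent)
cloudfs --token=$HOME/.2fa_token projectX /mnt/cloudsrc

# Export that single mount to the studio LAN (LAN speed)
echo "/mnt/cloudsrc 192.168.10.0/24(ro,root_squash)" >> /etc/exports
exportfs -a

# On each workstation: mount from the local NFS server, not the cloud
mount -t nfs nfs-local:/mnt/cloudsrc /mnt/projectX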

Mommy, What’s a 404? Level3 Takes over Your DNS Lookup

An ancient hieroglyph of a Page Not Found error – 404

One day, there will no longer be any unknown sites, addresses, or anything of the kind, thanks to modern technological advances in how we look things up. That little address bar became the gateway to all sorts of inventive ideas for making our lives easier, simpler, more useful. Instead of remembering arcane website addresses – or before that, actual IP addresses – we’d have browser plugin helpers for our search engines to convert any text in that address bar into searchable terms for our favorite Google, Bing, Yahoo, Ask, DuckDuckGo, whatever engine to look up for us.

BuyThisDomainAndMakeMeRich.com

Enterprising (sort of) domain campers tried to out-think our mistakes and pre-register every variant of a misspelled or syntactically-incorrect website address out there and re-direct them to their own domain-for-sale pages to generate income.

Mozilla thought ahead and offered its version of simpler-language “friendly” 404 pages to describe in regular words what happened.

Firefox’s Can’t Find an Address page (a dressed-up 404 screen)
There is nothing to see here. Move along.

Easier and easier. Less to remember every day. Your mind is becoming a blank page, open to whatever creative thought you can imagine, unhampered by useless memorized facts and figures. Just type in whatever words you can remember describing what it is you were looking for, and presto, your browser (or Siri, or Cortana, or whatever) provides you with a nice list of places you meant to actually visit.

Thanks to some more inventive programmers, and some back-channel deals with the global service providers that actually look up and translate the addresses to physical computers out there in the Cloud, you won’t have to be bothered with those pesky 404 errors ever again (unless you actually try typing in an invalid public IP address, in which case your browser’s search engine will take over and try to look that up).

searchguide.level3.com 404 name lookup
Welcome to searchguide.level3.com – no more 404’s ever. Even if you wanted them.

Presenting searchguide.level3.com (operated by Yahoo!, no less), which has partnered with several of the global domain name service (DNS) providers to re-direct those disappointing address-not-available lookups to its own pages: “we knew you were actually trying to use a search engine, so here’s our nice results list, including sponsored advertising à la AdChoice.”

Now, even your worst mistakes in typing can generate income for someone else.  Who’d have thought? Yea, Skynet/Genisys!

It’s probably a matter of time before 90% of the well-known DNS service companies monetize their DNS services, leaving it up to you to either re-configure manual resolv.conf files to point at non-monetized lookups, or at least switch to Google’s Public DNS (which tracks you everywhere you click anyway – 8.8.8.8 and 8.8.4.4), or to any of the providers that still remain: https://vpsboard.com/topic/2542-level3-public-dns-servers-search-engine-redirect/
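
On Linux/Unix, that re-configuration is a two-line file (a sketch – many distributions regenerate resolv.conf via DHCP or NetworkManager, so the change may need to be made in those tools instead):

# /etc/resolv.conf – use Google Public DNS instead of the ISP’s resolver
nameserver 8.8.8.8
nameserver 8.8.4.4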

Eventually, we’ll probably see the AltInternet end up creating its own subterranean DNS similar to what Tor still does.

Social Media, Explained in Java (extended)

Coffee and Social Media Icons
https://www.behance.net/JenniferHudsonTaylor

The original entries:

Facebook: I like drinking coffee.

Twitter: I’m drinking coffee.

YouTube: Watch my cat drink coffee.

Instagram: Selfie: Me and my cat sipping coffee before I leave for school today.

Pinterest: How to make coffee.

LinkedIn:  Skills – Can make great coffee

And…my new ones:

Google+: +1 for drinking coffee

Japanese Twitter: ソーシャルメディアとコーヒー かわいい! (“Social media and coffee. So cute!”)

Vine: 2 second video loop: Sip of coffee

Line: _c(_)_ (=^・^=)

Yelp: 4-stars for coffee at my home (-1 star for having to make it myself)

Foursquare: Tip – coffee is better in a cup with a cat

Swarm: 20 others are drinking coffee here.

Myspace: 10 little known songs about coffee, plus Timberlake’s cover of F. Sinatra’s Coffee song

Reddit: I hate all people who drink coffee with cats (troll)

HuffPost: Why drinking coffee with your cat makes you live longer

Blogger/WordPress/Tumblr/LiveJournal/TypePad: My thoughts on drinking coffee with cats

Flipboard: A visual magazine dedicated to cats sitting next to coffee cups

te@chthought: How to get students to drink less coffee and pay more attention to their cats

Buffer: “The only escape from the miseries of life are music and cats…” (…and coffee)- Albert Schweitzer

About.me: Why I drink coffee

Bebo: #coffeeiskewl

del.icio.us: The best sites for coffee

DeviantArt: a pop-art collage of cats sitting in coffee

AdWords: Save $1 at Starbucks and PetCo NOW!

Flickr: My collection of coffee pix

Influenster: Free sample of Folger’s MicroRoast for your review

Meetup: 5PM PT @ Jon’s Koffee Hut – open invite

eVite: Having a coffee party with my cat

Amazon.com: 3-stars. Bought this coffee – my cat hated it, but I loved it.

Smartphone Tablet Art Controller App – WiFi Digital Photo Frames Managed by Template

Simple concept – we’ve bought those digital photo frames that can take various memory cards and flash drives to display our photos. And some of them have become WiFi-enabled, so you can load pictures from your favorite online cloud storage (e.g. Photobucket, Flickr, Snapfish, etc.).

But what about an app to manage such frames all around your house (or office, or college, or wherever)?

Start with a basic photo library app that can build normal collections and folders, but extend the functionality to allow multiple digital photo frames (or even Smart TVs with WiFi photo RSS-feed capability) to be loaded on demand with your choice of photos.

Use WiFi-compatible SD cards like these to provide the basic connectivity, but assign each device (which usually ends up with a local IP address) as a controllable frame within the collection application (e.g. Frame 1 (living room), Frame 2 (kitchen), Frames 3 through 5 (hallway), etc.). Now assign those IPs to a template “gallery” for the app to manage the content and placement.
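
A minimal sketch of what that template layer might look like – everything here is hypothetical (the frame IPs, the upload endpoint, and whether a given frame accepts an HTTP upload at all depend entirely on the device):

#!/bin/ksh
# Hypothetical "Mother's Day" gallery template: frame-name:IP pairs
frames="livingroom:192.168.1.21 kitchen:192.168.1.22 hallway:192.168.1.23"
for f in $frames ;do
  name=${f%%:*} ; ip=${f##*:}
  echo "Loading Mother's Day gallery onto frame $name at $ip"
  for photo in $HOME/galleries/mothers_day/* ;do
    # Assumed upload endpoint; real frames/WiFi SD cards each have their own API
    curl -s -F "file=@$photo" "http://$ip/upload" >/dev/null
  done
done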

Simple uses might be: changing all the digital frames in your house to display your best children’s photos on Mother’s or Father’s Day. Load historical photos during national holidays. Celebrate a big birthday with a rolling series of funny or serious This is Your Life photos, all loaded and timed automatically to change at pre-determined intervals.

A more advanced use might be professional gallery management: provide previews of forthcoming gallery openings by using inexpensive 11×14 digital frames to give guests an idea of what’s coming next. Or artists might even end up programming the templates as interactive media showcases or exhibitions unto themselves.

The smartphone or tablet component (or any touchscreen capability)

Set of touchscreen smartphones

makes it easier to drag and drop photos to specific frames in the template – imagine the application having a basic floorplan of your house with the various digital frames in placeholder positions, so you could drag and drop photos into them as collection sets.  And save them.  And load them instantly.

@jhlui1 #DreamBig #ChangeTheWorld

Multi-path Multiplexed Network Protocol (TCP/IP over MMNP) Redundant Connections

Because connectivity is becoming less a convenience and more often a necessity, if not a criticality, there will be a built-in demand for 24×7 connectivity to/from data sources and targets.

In professional audio, wireless mics used to be a particularly problematic technology – while allowing free roaming around the stage, they were subject to drop-outs and interference from multiple sources, causing unacceptable interruptions in the audio signal quality of a performance. The manufacturers got together and created multi-channel multiplexing, allowing transmission of the same signal over multiple channels simultaneously, so that if one channel were interrupted, the other(s) could continue unimpeded and guarantee an interruption-free signal.

Now we need the same thing applied to network technology – in particular, the ever-expanding Internet.  Conventional Transmission Control Protocol/Internet Protocol (TCP/IP) addresses single source and single destination routing.  Each packet of data has sender and receiver information with it, plus a few extra bytes for redundancy and integrity checking, so that the receiver is guaranteed that it receives what was originally sent.

The problem occurs when that primary network connection is lost. The protocol calls for re-transmit requests and allows for re-tries, but effectively, once a connection goes down, it is up to the application to decide how to deal with the disconnection.

The answer may be the same as the one applied to those wireless microphones. Imagine two router-connected devices, for example a computer and its Internet DSL box. Usually only one wire connects the two, and if the wire is broken, lost, or disconnected, the transmission halts abruptly.

Now imagine having 2 or 4 Cat-5 cables between the devices, along with a network-layer appliance that takes the original TCP/IP packet from the sender and adds rider packets to it, including a path number (e.g. cable-1 through cable-4), plus a timing packet (similar to SMPTE code) that allows the receiver appliance to ensure packets received out of order, due to latency on different paths, are re-assembled back into the sequential order in which they were transmitted.

Then run these time-stamped and route-encoded duplicate packets through a standard compression and encryption algorithm to negate the effects of the added time and channel packet overhead.

[Addendum: 22-MAY-2015] Think of this time+route concept similar to how BitTorrent operates.  There are already companies working on channel aggregation appliances, but usually for combining bandwidth.  This approach is focused on the signal continuity aspect of the channel communication.

Reverse the process at the receiving end, and repeat the algorithm for the reverse-data path.

[Transmitter] — [data+time+channelID] — [compression/decompression] => (multiple connection routes) => [resequencer] — [Receiver]
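
As a crude userspace illustration of just the duplicate-path half of the idea (the rider-packet tagging and resequencing appliance is the part that doesn’t exist off the shelf), a single stream can be copied across two routes with standard tools – hosts and ports here are placeholders:

# Sender: duplicate one data stream over two independent routes
# (process substitution requires ksh93 or bash). A real MMNP appliance
# would add the sequence/timing rider packets, and the receiver would
# dedup and resequence - not shown here.
tar cf - /data | tee >(nc route-a.example.net 9001) | nc route-b.example.net 9002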

Time for some creative geniuses to make this happen, yesterday.  Banks need it. Companies need it. Even the communication carriers need this.

@jhlui1 #DreamBig #ChangeTheWorld