Well, yes, but not quite. Rental agencies have add-ons such as a Collision Damage Waiver, which is much like that 15% mandatory support fee layered on top of the core/sizing/machine-size fee: it provides coverage in case something goes awry (or gets buggy, or BSODs for no known reason).
And, simply put, if you stop paying, you don't have a car/database/middleware/website. If you do opt for the rent-to-own option, just like the ubiquitous furniture rentals favored by seasoned relocation workers, it costs much more than buying the furniture outright (but you don't have to move it, you get to turn it in, and you can trade up or down when you wish, subject to the terms of your rental agreement).
[Terms are important. As you will notice in the new Cloud On-Premise agreement, it has a 4-year minimum term, similar to a limited-term car lease, and it comes with an early-termination cost. Similarly, it has "limited mileage" conditions: if you go over your CPU/sizing/feature limits, you'll simply be billed extra. Convenience has costs.]
An autonomous database at this stage is similar to a self-driving car: given super-precise limitations, a controlled environment, and well-defined conditions, yes, the Optimizer stays within the lanes and keeps the database engine humming along. When an odd situation is encountered, it's back to the driver/DBA to figure out what to do and what went wrong.
A college education can make you think differently. As I read the original article, I recalled the many times my Statistics professors pointed out that anyone can basically lie with numbers to make them support whichever position they want. This was equally true in a class I took on Mass Persuasion and Propaganda.
Thus I present the same article, with the statistical conclusions of the IDG survey inverted and with minor modifications to the explanations so that they suit the inverted measures. Respect to the original author, Tori Ballantine, a product marketing lead at Hyland Cloud; no offense is intended by this grammatical exercise in inverting statistical results.
Top 7 Reasons Organizations Should Not Automatically Switch to Hosted or Cloud Enterprise Technology
As one of the leading industries that was an early adopter of process automation, manufacturing is often ahead of the curve when it comes to seeking ways to improve processes — yet still has work to do in the technology adoption realm. While the trend for cloud adoption is increasing over on-premises solutions overall, some organizations, including manufacturers, are hesitant to make the transition to the cloud.
There are countless compelling reasons offered for transitioning to hosted enterprise applications. According to a recent survey from IDG, IT leaders at companies with 250+ employees, across a wide range of industries, agreed on seven areas where cloud computing should benefit their organizations. These included:
Disasters, both natural and man-made, are inherently unpredictable. When the worst-case scenario happens, organizations need improved disaster recovery capabilities in place, including the economic resources to replicate content in multiple locations. According to the IDG survey, about 33 percent of IT leaders did not cite disaster recovery as the number one reason they would move, or have moved, to hosted enterprise solutions. By switching to a hosted solution, about 1/3 of organizations could not get their crucial applications running as soon as possible after an emergency, and are therefore unable to serve their customers.
IT leaders know that data and content are essential components of their daily business operations. In fact, according to the IDG research, 45 percent of survey participants listed data availability as the second leading limitation of cloud enterprise applications. Access to mission-critical information, when they need it, wherever they are, is essential for organizations to stay competitive and provide uninterrupted service. With no noticeable increase in uptime compared to on-premises applications, hosted solutions did not provide 24/7/365 data availability.
It shouldn't come as a surprise that the third most popular reason IT leaders seek cloud solutions is cost savings. Hosting in the cloud eliminates the need for upfront investment in hardware and the expense of maintaining and updating hosting infrastructure by shifting the cost basis to long-term operational costs. Hosting software solutions on-premises carries more than just risk; it carries a fair amount of operational cost. By hosting enterprise solutions in the cloud, organizations will reduce capital costs, with a possible reduction in operating costs, including staffing, overtime, maintenance and physical security, when these are centralized under a hosting provider.
The IDG survey found that 55 percent of IT professionals listed incident response as another area where cloud solutions provided no significant benefit over on-premises options. Large-scale systems can develop more efficient incident response capabilities, and improve incident response times compared to smaller, non-consolidated systems. As seconds tick by, compliance fines can increase along with end-user dissatisfaction. So having a quick incident response time is essential to reduce risk and ensure end-user satisfaction.
The best providers of hosted solutions constantly evaluate and evolve their practices to protect customers' data. This is crucial because up to 59 percent of IDG survey respondents noted security expertise as another leading reason they do not select cloud applications. Organizations with cloud-hosted applications could take advantage of the aggregated security expertise of their vendors to improve their own operations and keep information safe, but only by complying with externally driven security standards that were not enforceable due to application restrictions (legacy versioning, design constraints, third-party non-compliant architecture, etc.). To ensure your content stays safe, it's important to seek cloud providers with the right credentials; look for certifications such as SOC 1, 2 or 3 audits, ISO 27001 and CSA STAR registration.
The IDG survey found that over 63 percent of IT professionals were not seeking geographical dispersion of where their data is stored. In the event of data unavailability in a local data center, having a copy of the data in a separate geographical area ensures performance and availability of the data sources, though the resources to use that data may not be readily available if they are co-located in the same region as the primary data.
IT professionals seek hosted solutions because the best hosted software applications employ top-notch security professionals. Gaining access to these professionals’ insight helps ensure concerns are addressed and the software delivers on the organization’s needs.
In order to facilitate the best possible experience for your customers, it's important to keep up with technology trends that give you the data and insights you need to provide quality service. For many firms, that means focusing not only on process automation on the manufacturing floor, but also on the internal processes driven by data. There's a huge shift happening in how organizations choose to deploy software. In fact, according to a recent AIIM study, almost 25% of respondents from all industries are not seeking to deploy cloud software in any fashion. 60 percent of those surveyed plan to pursue a non-hybrid approach, focusing primarily on on-premises deployments, while 38 percent said they will deploy cloud solutions.
As noted in the seven areas above, the reasons for the lack of a shift to hosted enterprise applications are diverse and compelling. The cloud provides users with greater access to their information, when and where they need to access it, and doesn't confine users to an on-premises data source. When combined with the other benefits of improved business continuity, cost savings, incident response, security expertise and expert access, organizations should carefully consider whether their important information and content is really more available and secure in the cloud.
I like Nespresso's Vertuoline single-brew appliance, not because it's convenient (which it is), nor because it's quick (ditto), but because, thanks to inventive companies like My-Cap.com, the otherwise wasteful aluminum capsules can be re-used indefinitely (as long as you're careful not to pierce or dent them too much through handling).
My-Cap.com makes foils, replacement capsules, little plastic quick caps, and all sorts of accessories for single-use pod/capsule coffee makers, which definitely extends their environmental friendliness (otherwise, at one plastic or aluminum pod per cup, these machines create a ridiculous amount of landfill over time if the pods aren't properly recycled; Nespresso is one of the only companies that provides free shipping for recycling capsules).
But this post is about a more in-depth feature: the Vertuoline brewing formulation coded into the barcode that rings the bottom of each capsule.
Only recently has Nespresso started revealing the numerous formulas used in each version of the capsules, allowing those of us doing the re-use/re-pack/re-foil thing to properly select a barcode that will work best with the coffee being refilled.
I’ll try to keep the following table updated as new information arrives on this subject:
Cafe de Cuba
I use #2 espresso grind for the Lungo size capsules (taking 10-12g of coffee grounds), and #1 espresso for the Espresso size (which take from 5-8g by comparison). I pack each to within 1mm of the top of the flat rim of the capsule, which allows plenty of expansion room during the pre-wetting stage.
To remove the original foil, just run a hobby knife around the rounded part of the rim, inside the flatter portion that the foil is glued to, leaving a nice flat foil ring for the reusable foils to adhere to during extraction.
I also made up a few cardboard sleeves (a la Pringles cans) that close-fit the edges of the capsules to better keep the re-use foils on the capsules. The adhesive works well during extraction, but I disliked all the extra clamping and crimping others were doing to try to seal them better. I find that just placing the foil on top, running around the edge with the fan-brush handle, and then gently folding over the edges is fine. The adhesive is enough to seal against the rim lip during extraction, and as long as you keep the capsules upright, they won't spill.
The price for convenience is that these little capsules use about 200% more coffee per cup to reach the strength of a standard French-press cup, but they produce a good 2cm of crema in the process (more like a Vev-Vigano stove-top pressure coffee extractor). The espresso versions are a little lighter: their brewing process is modified, so they consume closer to a "normal" puck's worth of coffee per espresso (and they can be double-brewed: reset the cycle by turning the lock open, but don't open it, then re-lock to allow a second button press and re-trigger the pre-wetting cycle, if you prefer something like a 1-1/2 espresso).
The rubber or silicone ring invariably wears out, cracks, splits or otherwise no longer seals properly.
Interestingly, when browsing for replacements I noticed that there are a number of different pseudo-standard sizes involved.
Since they are flexible, one size too small may fit anyway, even if the gasket slightly buckles or curls. The thickness also varies enough (especially if too thick) that you may need to adjust the wire band to accommodate the extra thickness. Sometimes the lower band (the one on the jar itself) can be flipped upside down to give an additional 2mm of closure clearance for thicker 4mm gaskets; most gaskets are of the 2mm variety.
Common Canning Wire Bail Jar Gasket Sizes with Inside and Outside Diameter measurements:
* Antique gaskets were usually only 1/4″ wide instead of the modern 1/2″ width.
For reference, Modern Standard Ball/Mason Screw-on Canning Lid sizes:
Standard Mouth Ball: 2-1/2″ on center at lid seal; 2-3/8″ (60mm) ID, 2-5/8″ (67mm) OD at the jar opening.
Wide Mouth Ball: 3-1/8″ on center at lid seal; 3.0″ (76mm) ID, 3-1/4″ (83mm) OD at the jar opening.
There are also any number of decorative model jars that are tiny by comparison (1″/25mm ID) and were not meant for actual canning use, but were often used as salt and pepper shakers or for gifting jam samples. Naturally, getting replacement lids or gaskets for these is pretty much impossible, other than finding something that will work at your local hardware store in the pipe and plumbing department.
Based upon a recent meeting of minds at MESS (Media Entertainment & Scientific Systems, http://www.meetup.com/MESS-LA ), the challenge facing these industries is how to deal with petabyte-sized amassed data that still needs to be accessible in real time for secure editing by downstream customers and suppliers.
Here’s a multi-phase solution idea:
Use torrent technology for access, with authenticated peer-to-peer hosts, private SSL-encrypted trackers/announcers, and encrypted bit streams. This maintains access to the fundamental data source using minimal infrastructure.
Add two-factor authentication to the authentication protocol to allow time- and role-based security to be enforced (so-and-so p2p host is authorized to connect to the torrent during X days/N hours per day/etc.)
Use generic two-factor authentication providers (e.g. Symantec VIP or SAASPass) to allow the small service providers to access data without excessive overhead cost, or dedicated hardware solutions.
Store the data source files using a torrent+sharding+bit-slicing protocol (similar to the Facebook image storage model). Without authenticated access to the cloud torrent, any individual data chunk or shard grabbed by a sniffer becomes useless.
Segregate and divide the data files using a role-based security architecture (e.g. Scene 1 is needed by X post-production editor during N time period). Individual torrent participants can select the individual virtual file segments they need for work, without downloading the data chunks unrelated to them. Similarly, the time+role based security described above prevents access to/from data segments that are not authorized for that endpoint (a minimal sketch of such a check follows these phases). Password protection could even be added to individual sensitive segments to provide one more level of turn-key security.
Use a Google Drive/Dropbox style OS protocol to allow mounting of the torrent sources to the end-user workstations with transparent access. Whichever mechanism can provide adequate latency for the block replication should be sufficient. Rather than mounting the same cloud torrent to every local workstation, use local NFS servers to provide local home-basing of the cloud mount (WAN speed), then export that mount locally (LAN speed) to the various workstations that need access to it. That way, there’s only one penetration point to/from the cloud torrent, which can be adequately firewalled locally by the end-user. This is a solution for the end consumers that need access to the largest portion of the cloud data set.
The source data hives can use multi-path networking protocol ( https://jhlui1.wordpress.com/2015/05/21/multi-path-multiplexed-network-protocol-tcpip-over-mmnp-redundant-connections ) to further split and sub-divide the data streams (which are already encrypted), to maximize performance to bandwidth-limited consumer endpoints.
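To make the time+role idea concrete, here is a minimal sketch in Python (hypothetical names, not tied to any particular torrent client or tracker) of the kind of authorization check a private tracker or host could apply before releasing a segment:

# Hypothetical sketch: tracker-side check that an authenticated peer may
# fetch a given virtual file segment, enforcing the time-window and role
# rules described in the phases above.
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class Grant:
    peer_id: str        # authenticated p2p host
    role: str           # e.g. "post-production editor"
    segments: set       # virtual file segments this grant may touch
    days: set           # weekday numbers the grant is valid (0 = Monday)
    start: time         # daily window start
    end: time           # daily window end

def may_fetch(grant: Grant, peer_id: str, segment: str, now: datetime) -> bool:
    """True only if the peer, the segment, the weekday and the time of day
    all fall inside the grant."""
    return (peer_id == grant.peer_id
            and segment in grant.segments
            and now.weekday() in grant.days
            and grant.start <= now.time() <= grant.end)

# Example: Scene 1 is only available to this editor on weekdays, 9am-6pm.
editor = Grant("peer-42", "post-production editor", {"scene-01"},
               {0, 1, 2, 3, 4}, time(9, 0), time(18, 0))
print(may_fetch(editor, "peer-42", "scene-01", datetime.now()))

In practice the grant itself would come from the two-factor-authenticated login described above; the point is simply that every segment request is evaluated against who is asking, what they are asking for, and when.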
Media companies have a rather different data value model to deal with because during pre-production the data value is extremely high, but it drops off rapidly post-production release once the market consumes it. But the same model at a lower protection level would work for actual distribution – wherein end subscribers are authenticated for access to a particular resolution or feature set of the original cloud segments (e.g. 8K versus 1K media, or audio-only, or with or without Special Features access.)
One day, there will no longer be any unknown sites, addresses, or anything of the kind thanks to modern technological advances in how we look things up. That little Address bar became the gateway to all sorts of inventive ideas of how to make our lives easier, simpler, more useful. Instead of remembering arcane website addresses, or before that, actual IP addresses, we’d have browser plugin helpers for our search engines to convert any text in that address bar to searchable terms for our favorite Google, Bing, Yahoo, Ask, DuckDuckGo, whatever engine to look it up for us.
Enterprising (sort of) domain campers tried to out-think our mistakes and pre-register every variant of a misspelled or syntactically-incorrect website address out there and re-direct them to their own domain-for-sale pages to generate income.
Mozilla thought ahead and offered its version of simpler-language "friendly" 404 pages to describe in regular words what happened.
Easier and easier. Less to remember every day. Your mind is becoming a blank page, open to whatever creative thought you can imagine, unhampered by useless memorized facts and figures. Just type in whatever words you can remember describing what you were looking for, and presto, your browser (or Siri, or Cortana, or whatever) provides you with a nice list of the places you actually meant to visit.
Thanks to some more inventive programmers, and some back-channel deals with the global service providers that actually look up and translate addresses to physical computers out there in the Cloud, you won't have to be bothered with those pesky 404 errors ever again (unless you actually type in an invalid public IP address, in which case your browser's search engine will take over and try to look that up).
Now, even your worst mistakes in typing can generate income for someone else. Who’d have thought? Yea, Skynet/Genisys!
It's probably only a matter of time before 90% of the well-known DNS service companies monetize their DNS services, leaving it up to you to either re-configure manual resolv.conf files pointing to non-monetized lookups, or at least switch to Google's Public DNS (which tracks you everywhere you click anyway: 8.8.8.8 and 8.8.4.4), or any of the ones that still remain: https://vpsboard.com/topic/2542-level3-public-dns-servers-search-engine-redirect/
Eventually, we’ll probably see the AltInternet end up creating its own subterranean DNS similar to what Tor still does.
Simple concept – we’ve bought those digital photo frames that can take various memory cards and flash drives to display our photos. And some of them have become WiFi enabled so you can load pictures from your favorite online cloud storage (i.e. Photobucket, Flickr, Snapfish, etc.)
But what about an app to manage such frames all around your house (or office, or college, or wherever)?
Start with a basic photo library app that can build normal collections and folders, but extend the functionality to allow multiple digital photo frames (or even Smart TVs with WiFi photo RSS feed capability) to be loaded on demand with your choice of photos.
Use WiFi-compatible SD cards like these to provide the basic connectivity, but assign each device (which usually ends up with a local IP address) as a controllable frame within the collection application (e.g. Frame 1 (living room), Frame 2 (kitchen), Frames 3 through 5 (hallway), etc.). Now assign those IPs to a template "gallery" for the app to manage the content and placement.
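Here is a minimal sketch of that frame-to-gallery mapping in Python (the frame names, IP addresses and the push_photos stub are all hypothetical; the actual transfer would depend on whatever protocol the WiFi SD card or frame exposes):

# Hypothetical sketch of a "template gallery": named frames, each reachable
# at a local IP, mapped to photo collections that can be pushed in one call.
FRAMES = {
    "Frame 1 (living room)": "192.168.1.21",
    "Frame 2 (kitchen)":     "192.168.1.22",
    "Frame 3 (hallway)":     "192.168.1.23",
}

TEMPLATES = {
    "mothers-day": {
        "Frame 1 (living room)": ["kids/best-of/*.jpg"],
        "Frame 2 (kitchen)":     ["kids/baby-photos/*.jpg"],
    },
}

def push_photos(ip, photo_sets):
    # Placeholder: upload over whatever the frame or WiFi SD card supports
    # (HTTP upload, FTP, an RSS feed URL the frame polls, etc.)
    print("pushing", photo_sets, "to", ip)

def load_template(name):
    """Send each frame in the named template its assigned photo collections."""
    for frame, photo_sets in TEMPLATES[name].items():
        push_photos(FRAMES[frame], photo_sets)

load_template("mothers-day")

The same structure extends naturally to the scenarios below: a holiday or gallery show is just another entry in TEMPLATES, and switching the whole house is one load_template() call.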
Simple uses might be: changing all the digital frames in your house to display your best children’s photos during Mother or Father’s Day. Load historical photos during national holidays. Celebrate a big birthday with a rolling series of funny or serious This is Your Life photos, all being loaded and timed automatically to change at pre-determined intervals.
More advanced use might be professional gallery management, so you can provide previews of forthcoming gallery openings by using inexpensive 11×14 digital frames to give guests an idea of what's coming next. Or artists might even end up programming the templates as interactive media showcases or exhibitions unto themselves.
The smartphone or tablet component (or any touchscreen capability) makes it easier to drag and drop photos to specific frames in the template. Imagine the application having a basic floorplan of your house with the various digital frames in placeholder positions, so you could drag and drop photos into them as collection sets. And save them. And load them instantly.
Because connectivity is becoming less a convenience and more often a necessity, if not a criticality, there will be a built-in demand for 24×7 connectivity to/from data sources and targets.
In professional audio, wireless mics used to be a particularly problematic technology: while they allowed free roaming around the stage, they were subject to drop-outs and interference from multiple sources, causing unacceptable interruptions in the audio signal quality of a performance. The manufacturers got together and created multi-channel multiplexing, allowing transmission of the same signal over multiple channels simultaneously, so that if one channel were interrupted, the other(s) could continue unimpeded and guarantee interruption-free signals.
Now we need the same thing applied to network technology – in particular, the ever-expanding Internet. Conventional Transmission Control Protocol/Internet Protocol (TCP/IP) addresses single source and single destination routing. Each packet of data has sender and receiver information with it, plus a few extra bytes for redundancy and integrity checking, so that the receiver is guaranteed that it receives what was originally sent.
The problem occurs when that primary network connection is lost. The protocol allows for re-transmit requests and re-tries, but effectively, once a connection goes down, it is up to the application to decide how to deal with the disconnection.
The answer may be the same as the one applied to those wireless microphones. Imagine two router-connected devices, for example a computer and its Internet DSL box. Usually only one wire connects the two, and if that wire is broken, lost, or disconnected, the transmission halts abruptly.
Now imagine having 2 or 4 Cat-5 cables between the devices, along with a network-layer appliance that takes the original TCP/IP packet from the sender and adds rider packets to it that include a path number (i.e. cable-1 to cable-4), plus a timing packet (similar to SMPTE code) that allows the receiving appliance to ensure that packets received out of order, due to latency differences between paths, are re-assembled back into the sequential order in which they were transmitted.
Then run these time-stamped and route-encoded duplicate packets through a standard compression and encryption algorithm to negate the effects of the added time and channel packet overhead.
[Addendum: 22-MAY-2015] Think of this time+route concept similar to how BitTorrent operates. There are already companies working on channel aggregation appliances, but usually for combining bandwidth. This approach is focused on the signal continuity aspect of the channel communication.
Reverse the process at the receiving end, and repeat the algorithm for the reverse-data path.
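Here is a minimal sketch in Python of the rider-packet idea (the field names and the Reassembler class are hypothetical illustrations, not an existing protocol implementation): the sender duplicates each payload across every available path with a sequence number and timestamp, and the receiver ignores duplicates and releases payloads back in their original order.

import time

def make_riders(seq, payload, paths):
    """Wrap one payload into one rider packet per physical path."""
    stamp = time.time()
    return [{"seq": seq, "path": p, "stamp": stamp, "data": payload}
            for p in paths]

class Reassembler:
    """Receiving-side appliance: de-duplicate and restore transmit order."""
    def __init__(self):
        self.next_seq = 0
        self.pending = {}   # seq -> payload, for out-of-order arrivals

    def receive(self, rider):
        # Keep the first copy of any not-yet-delivered sequence number;
        # copies arriving later on other paths are simply ignored.
        if rider["seq"] >= self.next_seq:
            self.pending.setdefault(rider["seq"], rider["data"])
        # Release everything that is now contiguous, in order.
        out = []
        while self.next_seq in self.pending:
            out.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return out

# Example: packet 0 is delayed on cable-1 but its duplicate arrives on cable-2.
rx = Reassembler()
p0 = make_riders(0, b"hello", ["cable-1", "cable-2"])
p1 = make_riders(1, b"world", ["cable-1", "cable-2"])
print(rx.receive(p1[0]))   # [] -> held until sequence 0 arrives
print(rx.receive(p0[1]))   # [b'hello', b'world'] -> order restored
print(rx.receive(p0[0]))   # [] -> late duplicate of 0, ignored

Compression and encryption of the riders, as described above, would wrap around this without changing the ordering logic.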
Ever search for photos of a Suzuki S40 (née LS650 Savage) and notice that everybody shoots from the right side of the bike?
I personally think that's because we're all painfully aware of how woefully inadequate the 30+ year old solid single front disc with its single-pot caliper has been (especially once freeway speeds started jumping up). And why would you want to take a photo of something that heats up too fast, feels spongy even with better brake lines, and isn't so bad tooling around and cruising at low speeds, but otherwise feels like you're riding a Schwinn Stingray (wherein your rear drum brake locks up and skids along while the front is still grabbing air)?
And now Suzuki S40 and LS 650 (and Ryca conversion) owners can have fully-floating rebuildable discs with dual-pot calipers and bring their bikes into the 21st century (or at least into the last decade of the 20th, if you prefer.) It’s not ABS (which would cost more than the whole bike) but it’s a welcome upgrade that bolts-on (literally) and is very well-engineered and designed using already proven technology (just assembled in a different way, with an engineer’s eye towards functionality and purpose).
It’s affordable, and just what this little kicker needed among the plethora of engine-specific upgrades that already address more horsepower (web cams, big bore kits, bigger carburetors and jetting, revised exhausts, etc.).
With the forthcoming (but already available) SQLDeveloper 4.1 edition, an improved version of the Oracle Data Miner tools is incorporated into the SQLDeveloper console. However, I found that a number of steps were needed to actually use this new data mining product beyond just responding 'Yes' to the "Do you wish to enable the Data Miner Repository on this database?" prompt.
Here’s what I ended up doing to get things up and running (so that I could play with data modeling and visualization using Excel and the new SQLDeveloper DM extensions.)
# In this case, I'm adding the demonstration data back (i.e. the EMP/DEPT-type tables; the SH, OE, HR, et al. schemas) into an existing R12 e-Business Suite (12.1.3) instance.
# Installing the Oracle Demo data in an R12 instance.
# Use the runInstaller from the R12 $ORACLE_HOME
export DISPLAY=<workstation IP>:0.0
# Choose the source products.xml from the staging area – Download and stage the DB Examples CD from OTN
# Complete the OUI installation through [Finish]
mkdir -p $ORACLE_HOME/demo/schema/log
echo $ORACLE_HOME/demo/schema/log/ ## used to respond to the Log Directory prompt during mksample.sql
sqlplus "/ as sysdba"
-- Will need passwords for: SYS, SYSTEM and APPS (APPS is used for all of the demo schemas, some of which pre-exist, such as HR and OE; PM, IX, SH and BI were okay to create on 12.1.3).
-- ## Be sure to comment out any DROP USER (HR, OE, etc.) commands in this script (or you will be restoring your EBS instance from a backup because it just dropped your module schema tables...) ##
-- They look like this:
mksample.sql:-- DROP USER hr CASCADE;
mksample.sql:-- DROP USER oe CASCADE;
mksample.sql:DROP USER pm CASCADE;
mksample.sql:DROP USER ix CASCADE;
mksample.sql:DROP USER sh CASCADE;
mksample.sql:DROP USER bi CASCADE;
-- Similarly, if/when you decide you no longer need the data, do NOT just run the $ORACLE_HOME/demo/schema/drop_sch.sql script
-- or you will have just dropped your HR/OE/BI EBS schemas; don't do that.
drop_sch.sql:PROMPT Dropping Sample Schemas
drop_sch.sql:-- DROP USER hr CASCADE;
drop_sch.sql:-- DROP USER oe CASCADE;
drop_sch.sql:DROP USER pm CASCADE;
drop_sch.sql:DROP USER ix CASCADE;
drop_sch.sql:DROP USER sh CASCADE;
drop_sch.sql:DROP USER bi CASCADE;
order_entry/oe_main.sql:-- Dropping the user with all its objects
order_entry/oe_main.sql:-- DROP USER oe CASCADE;
order_entry/oe_main.sql:-- ALTER USER oe DEFAULT TABLESPACE &tbs QUOTA UNLIMITED ON &tbs;
-- In this instance the $APPS_PW is synchronized across all application module schemas (i.e. AR, HR, GL, etc.)
-- The log directory would be the actual path from echo $ORACLE_HOME/demo/schema/log/ (including the trailing slash)
# to additionally create the Data Mining user (DM in this case)
create user &&dmuser identified by &&dmuserpwd
default tablespace &&usertblspc
temporary tablespace &&temptblspc
quota unlimited on &&usertblspc;
GRANT CREATE JOB TO &&dmuser;
GRANT CREATE MINING MODEL TO &&dmuser; -- required for creating models
GRANT CREATE PROCEDURE TO &&dmuser;
GRANT CREATE SEQUENCE TO &&dmuser;
GRANT CREATE SESSION TO &&dmuser;
GRANT CREATE SYNONYM TO &&dmuser;
GRANT CREATE TABLE TO &&dmuser;
GRANT CREATE TYPE TO &&dmuser;
GRANT CREATE VIEW TO &&dmuser;
GRANT EXECUTE ON ctxsys.ctx_ddl TO &&dmuser;
GRANT CREATE ANY DIRECTORY TO &&dmuser;
-- Grant the SH Demo table and package objects to the DM user
-- Create the Data Mining views against the SH Demo table and package objects
@?/rdbms/demo/dmabdemo.sql -- Builds the Adaptive Bayes Network model demo
@?/rdbms/demo/dmaidemo.sql -- Builds the Attribute Importance demo
@?/rdbms/demo/dmardemo.sql -- Builds the Association Rules demo
@?/rdbms/demo/dmdtdemo.sql -- Builds the Decision Tree demo
@?/rdbms/demo/dmdtxvlddemo.sql -- Builds the Cross Validation demo
@?/rdbms/demo/dmglcdem.sql -- Builds the Generalized Linear Model demo
@?/rdbms/demo/dmglrdem.sql -- Builds the Generalized Linear Regression model demo
@?/rdbms/demo/dmhpdemo.sql -- Not a Data Mining program; Hierarchical Profiler
@?/rdbms/demo/dmkmdemo.sql -- Builds the K-Means Clustering model demo
@?/rdbms/demo/dmnbdemo.sql -- Builds the Naive Bayes model demo
@?/rdbms/demo/dmnmdemo.sql -- Builds the Non-Negative Matrix Factorization model demo
@?/rdbms/demo/dmocdemo.sql -- Builds the O-Cluster model demo
@?/rdbms/demo/dmsvcdem.sql -- Builds the Support Vector Machine model demo
@?/rdbms/demo/dmsvodem.sql -- Builds the One-Class Support Vector Machine model demo
@?/rdbms/demo/dmsvrdem.sql -- Builds the Support Vector Regression model demo
@?/rdbms/demo/dmtxtfe.sql -- Builds the Oracle Text Term Feature Extractor demo
@?/rdbms/demo/dmtxtnmf.sql -- Builds the Text Mining Non-Negative Matrix Factorization model demo
@?/rdbms/demo/dmtxtsvm.sql -- Builds the Text Mining Support Vector Machine model demo
## End of Data Mining Demo user (DM) setup and configuration for use of Oracle Demo Data
I have one of these (a Yamaha PSR-S900 Arranger Keyboard Workstation), and after 7 years the display started going bad: half of the screen was duplicated, with lines running through the middle of the display.
This renders the keyboard nearly, but not entirely, unusable: there is still a composite video out that can be sent to a portable DVD/LCD player, which works for the purpose of reading what's on the display (your patch selections, mixer settings, scoring, sheet music, file selections, etc.).
I hit eBay and found a few replacement display units for about $150 (shipped from China, but made in Japan), and figured it would be worth trying (after all, a new PSR-S950 still runs about $2000.)
The replacement looks like this:
There's a single red/white power lead pair with a small white modular plug used to connect it to the high-voltage power daughterboard (my plug had one fewer white connector, so I used a modeling knife to trim off the extra middle connector).
The display itself connects via a 10-wire flat ribbon connector that is press-fit into the LCD's receptacle. These are somewhat fragile, but when carefully removed they can easily be re-inserted into the same receptacle (similar to re-wiring a video game console mod). In the photo, this receptacle is on the right-center side of the display.
Since I didn't happen to have the service manual, we dive in with the screwdriver (all Phillips-head). Flipping the keyboard over and laying it on a mattress (to avoid scratches), you'll find fourteen 3/4-inch panel screws, four slightly longer 1-1/2-inch panel screws used in the center holes of the keyboard, and about twenty-four larger-headed 1-inch panel screws connecting two wood panels to the speakers and bottom frame. You get to remove ALL of these to get the bottom and top shells separated (just keep them in separate dishes/jars).
The bottom assembly roughly resembles this view from a PSR-1500 (for general reference; the PSR-S900 is more symmetrical in design). The larger screws connect the bottom boards to the two speaker enclosures and attach the wood panels to the bottom plastic shell; the smaller screws go into the taller pyramid-looking tower holes in the bottom case:
This is an interior view of where the LCD is actually mounted (underneath the front panel; this view is of the bottom of the top half of the keyboard):
To access this view, you will need to remove the 6 mounting screws holding the CPU board box (the large aluminum vented box sitting on top of the LCD panel area). There are grounding wires on 3 sides of this box that are simply attached with more of the small panel screws. You can either remove the screws that attach the box to the mounting posts, or the screws that hold the posts to the top assembly (whichever you can access most easily). The only connections I removed to access the LCD were the ethernet cable plugging into the CPU box, and the 2 white multi-wire connectors that plug into the back-panel connector board (the one with the USB plugs, video connectors and MIDI In/Out; it's mounted to the top (silver) case assembly):
Once the CPU box is unmounted and moved aside (un-taping the wires that are taped to the box), you can usually access the first 2 (of 4) screws mounting the LCD to the front panel (these are the 2 closest to the keyboard). You only need to remove the screws attaching the LCD to the aluminum mounts (you do not need to remove the mounts themselves). To access the other 2 screws (the ones towards the back panel), if you don't have a right-angle screwdriver that can fit under the back-panel connector board (about 1 inch of clearance), you can remove the 6 screws holding the connector board to the top case assembly (4 of these have bendable wire tie-downs on them; the other 2 seem to hold the mylar foil shielding tabs). There is also a single screw connecting the coaxial video connector to the back panel that must be removed to move the board.
Inside my particular model (which might have been an earlier build than the one my replacement LCD was designed to fit) the high voltage board was connected with a longer set of leads to the defective LCD. So I unmounted it, rotated it clockwise 90 degrees to move the connector closer to the LCD, and re-mounted it using a single screw to hold it in-place again.
After un-mounting the defective LCD, I removed the existing ribbon connector and, before mounting the new display, re-attached the ribbon to the new LCD (drawing a line on the ribbon with a marker helps you remember how deep it was plugged in before). I plugged in the HV power lead and powered up to confirm the new display actually works (the first time, there were a bunch of alternating shadows, indicating I hadn't seated the ribbon connector properly).
4 screws back in to hold the LCD, 6+1 screws to re-mount the connector board, 6 more to remount the CPU box (and re-connect the ground wires, and 2 of them hold the box shut), then you can shut the case and replace all of those other screws you took out that hold the case together.
Nothing particularly technical – mostly a bunch of screws and tape. And about an hour and an eBay purchase later, the keyboard is back up and running fine.