Tag Archives: statistics

Top 7 Reasons Organizations Should Not Automatically Switch to Hosted Enterprise Technology

[Image: a cloud with a "no" symbol, captioned "Not Cloud?"]

A college education can make you think differently.  As I read the original article, I recalled the many times my Statistics professors pointed out that anyone can, in effect, lie with numbers to make them support whichever position they want. The same lesson came up in a class I took on Mass Persuasion and Propaganda.

Thus I present the same article, with the statistical conclusions of the IDG survey inverted and with minor modifications to the explanations to suit the inverted results.  Respect is given to the original author, Tori Ballantine, a product marketing lead at Hyland Cloud.  No offense is intended by this grammatical exercise in statistical inversion.

Original Article:

Top 7 Reasons Manufacturers Should Host Enterprise Technology
https://www.mbtmag.com/article/2018/07/top-7-reasons-manufacturers-should-host-enterprise-technology

Top 7 Reasons Organizations Should Not Automatically Switch to Hosted or Cloud Enterprise Technology

As one of the leading early adopters of process automation, manufacturing is often ahead of the curve when it comes to seeking ways to improve processes — yet it still has work to do in the technology adoption realm. While overall cloud adoption is increasing relative to on-premises solutions, some organizations, including manufacturers, are hesitant to make the transition to the cloud.

There are countless reasons commonly given for transitioning to hosted enterprise applications. According to a recent survey from IDG, IT leaders at companies with 250 or more employees, across a wide range of industries, agreed on seven areas where cloud computing should benefit their organizations. These included:

Disaster Recovery

Disasters, both natural and man-made, are inherently unpredictable. When the worst-case scenario happens, organizations need improved disaster recovery capabilities in place — including the economic resources to replicate content in multiple locations. According to the IDG survey, about 33 percent of IT leaders did not find disaster recovery to be the number one reason they would move, or have moved, to hosted enterprise solutions. By switching to a hosted solution, about one-third of organizations could not get their crucial applications running as soon as possible after an emergency, and were therefore unable to serve their customers.

Data Availability

IT leaders know that data and content are essential components of their daily business operations. In fact, according to the IDG research, 45 percent of survey participants listed data availability as the second leading limitation that cloud enterprise applications were unable to address. Access to mission-critical information, when they need it, wherever they are, is essential for organizations to stay competitive and provide uninterrupted service. With no noticeable increase in uptime compared to on-premises applications, hosted solutions did not provide 24/7/365 data availability.

Cost Savings

It shouldn’t come as a surprise that the third most popular reason IT leaders seek cloud solutions is cost savings. Hosting in the cloud eliminates the need for upfront investment in hardware and the expense of maintaining and updating hosting infrastructure, by shifting the cost basis to long-term operational costs. Hosting software solutions on-premises carries more than just risk; it also carries a fair amount of operational cost. By hosting enterprise solutions in the cloud, organizations will reduce capital costs, with a possible reduction in operating costs (including staffing, overtime, maintenance and physical security) when these functions are centralized under a hosting provider.

Incident Response

The IDG survey found that 55 percent of IT professionals listed incident response as another area where cloud solutions provided no significant benefit over on-premises options. Large-scale systems can develop more efficient incident response capabilities and improve incident response times compared to smaller, non-consolidated systems. As seconds tick by, compliance fines can increase along with end-user dissatisfaction, so a quick incident response time is essential to reduce risk and ensure end-user satisfaction.

Security Expertise

The best providers of hosted solutions constantly evaluate and evolve their practices to protect customers’ data. This is crucial, because up to 59 percent of IDG survey respondents noted security expertise as another leading reason they do not select cloud applications. Organizations with cloud-hosted applications could take advantage of their vendors’ aggregated security expertise to improve their own operations and keep information safe, but only by complying with externally driven security standards that may not be enforceable due to application restrictions (legacy versioning, design constraints, third-party non-compliant architecture, et al.). To ensure your content stays safe, it’s important to seek cloud providers with the right credentials — look for SOC 1, SOC 2 or SOC 3 audits, ISO 27001 certification and CSA STAR registration.

Geographical Dispersion

The IDG survey found that over 63 percent of IT professionals were not seeking geographical dispersion of where their data is stored. In the event of data unavailability in a local data center, having a copy of the data in a separate geographical area ensures the performance and availability of the data sources, though the resources needed to use that data may not be readily available, as they are co-located in the local region of the primary data.

Expert Access

IT professionals seek hosted solutions because the vendors of the best hosted software applications employ top-notch security professionals. Gaining access to these professionals’ insight helps ensure concerns are addressed and the software delivers on the organization’s needs.

In order to facilitate the best possible experience for your customers, it’s important to keep up with technology trends that give you the data and insights you need to provide quality service. For many firms, that means focusing not only on process automation on the manufacturing floor, but also on the internal processes driven by data. There’s a huge shift happening in how organizations choose to deploy software. In fact, according to a recent AIIM study, almost 25 percent of respondents across all industries are not seeking to deploy cloud software in any fashion. Sixty percent of those surveyed plan to focus on a non-hybrid approach, primarily leveraging on-premises deployments, while 38 percent said they will deploy cloud solutions.

As noted in the seven areas above, the reasons for the lack of a wholesale shift to hosted enterprise applications are diverse and compelling. The cloud provides users with greater access to their information, when and where they need it, rather than confining them to an on-premises data source. Weighing that against the claimed benefits of improved business continuity, cost savings, incident response, security expertise and expert access, organizations should carefully consider whether their important information and content really is more available and secure in the cloud.

 


Using Demo Data for Oracle Data Mining Tools

With the forthcoming (but already available) SQLDeveloper 4.1 release, an improved version of the Oracle Data Miner tools is incorporated into the SQLDeveloper console.  However, I found that a number of steps were needed to actually use this new data modeling product beyond just responding ‘Yes’ to the “Do you wish to enable the Data Miner Repository on this database?” prompt.

Here’s what I ended up doing to get things up and running (so that I could play with data modeling and visualization using Excel and the new SQLDeveloper DM extensions).
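Before running anything, a quick sanity check (just a sketch, assuming the default ODMRSYS schema name that the Data Miner repository installer creates) shows whether the repository already exists:

-- Does the SQL Developer Data Miner repository schema exist yet?
SQL> SELECT username, account_status FROM dba_users WHERE username = 'ODMRSYS';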

# In this case, I’m adding the demonstration data (i.e. the EMP/DEPT-type tables and the SH, OE, HR, et al. schemas) back into an existing R12 e-Business Suite (12.1.3) instance.

# Installing the Oracle Demo data in an R12 instance.

# Use the runInstaller from the R12 $ORACLE_HOME
cd $ORACLE_HOME/oui/bin
export DISPLAY=<workstation IP>:0.0
./runInstaller

# Choose the source products.xml from the staging area – Download and stage the DB Examples CD from OTN
/mnt/nfs/database/11203/examples/stage/products.xml

# Complete the OUI installation through [Finish]

cd $ORACLE_HOME/demo/schema
mkdir -p $ORACLE_HOME/demo/schema/log
echo $ORACLE_HOME/demo/schema/log/     ## used to respond to the Log Directory prompt during mksample.sql

sqlplus "/ as sysdba"
-- You will need passwords for SYS/SYSTEM and APPS. APPS is used for all of the demo schemas, some of which pre-exist, such as HR and OE (PM, IX, SH and BI were okay to create in 12.1.3).

-- ## Be sure to comment out any DROP USER (HR, OE, etc.) commands in this script, or you will be restoring your EBS instance from a backup because it just dropped your module schema tables... ##
-- They look like this:
/*
mksample.sql:-- DROP USER hr CASCADE;
mksample.sql:-- DROP USER oe CASCADE;
mksample.sql:DROP USER pm CASCADE;
mksample.sql:DROP USER ix CASCADE;
mksample.sql:DROP USER sh CASCADE;
mksample.sql:DROP USER bi CASCADE;
*/

-- Similarly, if/when you decide you no longer need the data, do NOT just run the $ORACLE_HOME/demo/schema/drop_sch.sql script,
-- or you will have just dropped your HR/OE/BI EBS schemas; don't do that.
/*
drop_sch.sql:PROMPT Dropping Sample Schemas
drop_sch.sql:-- DROP USER hr CASCADE;
drop_sch.sql:-- DROP USER oe CASCADE;
drop_sch.sql:DROP USER pm CASCADE;
drop_sch.sql:DROP USER ix CASCADE;
drop_sch.sql:DROP USER sh CASCADE;
drop_sch.sql:DROP USER bi CASCADE;

order_entry/oe_main.sql:-- Dropping the user with all its objects
order_entry/oe_main.sql:-- DROP USER oe CASCADE;
order_entry/oe_main.sql:-- ALTER USER oe DEFAULT TABLESPACE &tbs QUOTA UNLIMITED ON &tbs;

*/
-- in this instance, $APPS_PW is synchronized across all application module schemas (i.e. AR, HR, GL, etc.)
-- the log directory is the actual path from: echo $ORACLE_HOME/demo/schema/log/ (including the trailing slash)
SQL> @mksample.sql
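For reference, mksample.sql can also be invoked with positional arguments instead of answering each prompt. The sketch below uses placeholder values throughout, and the parameter order shown (SYSTEM and SYS passwords, one password per sample schema, default and temporary tablespaces, log directory) is from the 11.2 examples distribution; verify it against the header of your copy of the script.

-- all values below are placeholders
SQL> @mksample <SYSTEM_pwd> <SYS_pwd> <HR_pwd> <OE_pwd> <PM_pwd> <IX_pwd> <SH_pwd> <BI_pwd> USERS TEMP <log_directory>/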

# to additionally create the Data Mining user (DM in this case)
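-- Optional: pre-define the substitution variables used below so SQL*Plus
-- does not prompt for them interactively. These values are hypothetical
-- placeholders; substitute the user name, password and tablespaces for your instance.
DEFINE dmuser     = DM
DEFINE dmuserpwd  = <dm_password>
DEFINE usertblspc = USERS
DEFINE temptblspc = TEMP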

create user &&dmuser identified by &&dmuserpwd
default tablespace &&usertblspc
temporary tablespace &&temptblspc
quota unlimited on &&usertblspc;

GRANT CREATE JOB TO &&dmuser;
GRANT CREATE MINING MODEL TO &&dmuser;       -- required for creating models
GRANT CREATE PROCEDURE TO &&dmuser;
GRANT CREATE SEQUENCE TO &&dmuser;
GRANT CREATE SESSION TO &&dmuser;
GRANT CREATE SYNONYM TO &&dmuser;
GRANT CREATE TABLE TO &&dmuser;
GRANT CREATE TYPE TO &&dmuser;
GRANT CREATE VIEW TO &&dmuser;
GRANT EXECUTE ON ctxsys.ctx_ddl TO &&dmuser;
GRANT CREATE ANY DIRECTORY TO &&dmuser;
-- Grant the SH demo table and package objects to the DM user
@?/rdbms/demo/dmshgrants.sql &&dmuser

connect &&dmuser/&&dmuserpwd
-- Create the Data Mining views against the SH demo table and package objects
@?/rdbms/demo/dmsh.sql
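-- Optional sanity check: dmsh.sql should have created the MINING_* views
-- (view names as created by the 11g demo scripts; verify against your release).
SELECT view_name FROM user_views WHERE view_name LIKE 'MINING%';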

@?/rdbms/demo/dmabdemo.sql -- Builds the Adaptive Bayes Network model demo
@?/rdbms/demo/dmaidemo.sql -- Builds the Attribute Importance demo
@?/rdbms/demo/dmardemo.sql -- Builds the Association Rules demo
@?/rdbms/demo/dmdtdemo.sql -- Builds the Decision Tree demo
@?/rdbms/demo/dmdtxvlddemo.sql -- Builds the Cross Validation demo
@?/rdbms/demo/dmglcdem.sql -- Builds the Generalized Linear Model classification demo
@?/rdbms/demo/dmglrdem.sql -- Builds the Generalized Linear Model regression demo
@?/rdbms/demo/dmhpdemo.sql -- not a Data Mining demo; Hierarchical Profiler
@?/rdbms/demo/dmkmdemo.sql -- Builds the K-Means Clustering model demo
@?/rdbms/demo/dmnbdemo.sql -- Builds the Naive Bayes model demo
@?/rdbms/demo/dmnmdemo.sql -- Builds the Non-Negative Matrix Factorization model demo
@?/rdbms/demo/dmocdemo.sql -- Builds the O-Cluster model demo
@?/rdbms/demo/dmsvcdem.sql -- Builds the Support Vector Machine classification demo
@?/rdbms/demo/dmsvodem.sql -- Builds the One-Class Support Vector Machine model demo
@?/rdbms/demo/dmsvrdem.sql -- Builds the Support Vector Regression model demo
@?/rdbms/demo/dmtxtfe.sql -- Builds the Oracle Text Term Feature Extractor demo
@?/rdbms/demo/dmtxtnmf.sql -- Builds the Text Mining Non-Negative Matrix Factorization model demo
@?/rdbms/demo/dmtxtsvm.sql -- Builds the Text Mining Support Vector Machine model demo
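-- Once the demo scripts complete, a quick sanity check from the DM account:
-- list the mining models that were built (USER_MINING_MODELS is the standard
-- data dictionary view).
SELECT model_name, mining_function, algorithm
  FROM user_mining_models
 ORDER BY model_name;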

## End of Data Mining Demo user (DM) setup and configuration for use of Oracle Demo Data

ORA-06512 ‘DBSNMP.BSLN_INTERNAL’

Periodic Stats Gathering Fails in 11gR2 Oracle Database – found in alert.log: ORA-06512: at "DBSNMP.BSLN_INTERNAL", line 2073

The fix is to drop and recreate the DBSNMP schema, which rebuilds the corrupted baseline objects:

SQL> @?/rdbms/admin/catnsnmp.sql

SQL> @?/rdbms/admin/catsnmp.sql
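Afterwards, one way to confirm the rebuild took effect is to check that the baseline job is back in the scheduler (a sketch; BSLN_MAINTAIN_STATS_JOB is the job name reported in the error, but verify it in your release):

SQL> SELECT owner, job_name, state, failure_count FROM dba_scheduler_jobs WHERE job_name = 'BSLN_MAINTAIN_STATS_JOB';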

Dbhk's Blog

Symptoms

1. BSLN_MAINTAIN_STATS_JOB fails every Sunday

2. The alert log contains the following errors:

Errors in file /opt/oracle/diag/rdbms/vstbpro/vstbpro1/trace/vstbpro1_j000_7796.trc:
ORA-12012: error on auto execute of job 11762
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "DBSNMP.BSLN_INTERNAL", line 2073

3. Details from the trace file:

Trace file /opt/oracle/diag/rdbms/vstbpro/vstbpro1/trace/vstbpro1_j000_7796.trc
Oracle Database 11g Release 11.1.0.7.0 – 64bit Production
With the Real Application Clusters option
ORACLE_HOME = /opt/oracle/product/11g
System name:    SunOS
Node name:      wvpdb09
Release:        5.10
Version:        Generic_138888-01
Machine:        sun4u
Instance name: vstbpro1
Redo thread mounted by this instance: 1
Oracle process number: 73
Unix process pid: 7796, image: oracle@wvpdb09 (J000)

*** 2010-04-11 15:00:04.639
*** SESSION ID:(1041.32020) 2010-04-11 15:00:04.639
*** CLIENT ID:() 2010-04-11 15:00:04.639
*** SERVICE NAME:(SYS$USERS) 2010-04-11 15:00:04.639
*** MODULE NAME:(DBMS_SCHEDULER) 2010-04-11 15:00:04.639
*** ACTION NAME:(BSLN_MAINTAIN_STATS_JOB) 2010-04-11 15:00:04.639

ORA-12012: error on auto execute of job 11762
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "DBSNMP.BSLN_INTERNAL", line 2073
ORA-06512: at line 1

Cause

Table DBSNMP.BSLN_BASELINES contains…


12c Histograms pt.2

Oracle Scratchpad

In part 2 of this mini-series I’ll be describing the new mechanism for the simple frequency histogram and the logic of the Top-N frequency histogram. In part 3 I’ll be looking at the new hybrid histogram.

You need to know about the approximate NDV (number of distinct values) before you start examining the 12c implementation of the frequency and top-frequency histograms – but there’s a thumbnail sketch at the end of the posting if you need a quick reminder.

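As a companion to the excerpt above, here is a minimal sketch (table and column names are invented for illustration) of how a 12c histogram is requested through DBMS_STATS. With the default AUTO_SAMPLE_SIZE, the approximate NDV mechanism is used; a column with 254 or fewer distinct values gets a simple frequency histogram, while more distinct values may produce a top-frequency or hybrid histogram instead:

-- Gather stats with AUTO_SAMPLE_SIZE (enables approximate NDV) and
-- request a histogram on a skewed column.
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'T1',                              -- hypothetical table
    method_opt       => 'FOR COLUMNS skew_col SIZE 254',   -- hypothetical column
    estimate_percent => dbms_stats.auto_sample_size
  );
END;
/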