Uniform distribution of load
Hi Gurus
I want a uniform distribution of load against the available capacity. How is this possible?
For example: I have 100 hours of capacity for a week (20 hours daily), and only a 40-hour load from a production order.
The production order is scheduled in such a way that it is completed in 2 days.
My requirement is that the system should distribute the load evenly across all days of the week, i.e. 8 hours per day.
How is this done?
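The arithmetic behind the requirement above (a 40-hour load leveled over a 5-day week gives 8 hours per day) can be sketched as follows. This is only an illustration of the desired leveling result, not SAP's capacity-leveling logic; `level_load` is a hypothetical helper.

```python
def level_load(total_load_hours, working_days):
    """Spread a production-order load evenly across the given working days.

    Hypothetical sketch: real capacity leveling also respects per-day
    capacity limits and scheduling constraints.
    """
    per_day = total_load_hours / len(working_days)
    return {day: per_day for day in working_days}

week = ["Mon", "Tue", "Wed", "Thu", "Fri"]
leveled = level_load(40, week)
print(leveled)  # 8.0 hours on each of the 5 days
```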
Hi
In this case you need to reduce the utilization percentage in the capacity view of the work centers, or define a new load situation like normal production .... under production.
krishna
Similar Messages
-
Even logon distribution using load balancing
We are trying to scale our system to distribute logons across multiple app servers. We have set up logon groups and have set MSHOST in our connection configurations, but since all logons occur at roughly the same time, they all end up going to the same app server. We have 40 connections and ideally would want 20 to go to server A and 20 to go to server B, so that when each logon has work to do, the load is evenly distributed.
We have tried modifying the threshold value for each app server in the logon group, but that has had no impact.
Any suggestions?
Thanks
Tim
Logon groups should work fine if logins are separated by at least 5 minutes.
SAP calculates the best server for dialog logon by determining the quality of each server. It weights the number of users and the dialog response time in a 5:1 ratio and then calculates the quality of the server. This information is updated every 5 minutes. Users logging on at a particular time are assigned to the best server (according to quality).
When all the users are going to the same server, you can check the quality of each server:
SMLG --> F5 (Goto > Load Distribution) and SMLG --> F6 (Message server status area). Check whether the current logon instance changes over time. If it does, your logon groups are working; if not, raise an OSS message with SAP.
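To illustrate the 5:1 weighting described above: a lower score means a better server, and the server with the lowest score receives the next logons. The scoring function below is a hypothetical sketch, not SAP's internal formula, and the server names and numbers are made up.

```python
def server_quality(num_users, avg_response_ms, user_weight=5, resp_weight=1):
    """Hypothetical quality score (lower is better).

    Mirrors the idea described above: number of logged-on users and dialog
    response time combined in a 5:1 ratio. The real message server uses its
    own internal formula; this is only an illustration.
    """
    return user_weight * num_users + resp_weight * avg_response_ms / 100.0

# (users, avg dialog response time in ms) per app server -- invented values
servers = {"App1": (120, 450), "App2": (80, 900), "App3": (95, 300)}

# Pick the "best" server, as the message server would every 5 minutes
best = min(servers, key=lambda name: server_quality(*servers[name]))
print(best)
```

Because the score is recomputed only every 5 minutes, a burst of simultaneous logons all sees the same "best" server, which matches the behavior reported in the question.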
Even if your logon group is working but you are not getting the benefit, you can set up multiple logon groups with different app servers, e.g. GRP1 has App1 and App2, GRP2 has App3 and App4, and so on, and then ask your users to use different logon groups.
One more thing: what is the auto_logout setting in your system? If users log in frequently and log off after their work, they get the best effect from logon groups.
Waiting for further details from you
Edited by: Anindya Bose on Aug 11, 2009 10:44 AM -
Uniform random number generator
Does anyone know how to write a UNIFORM RANDOM NUMBER GENERATOR that has serious faults?
jwenting wrote:
uj_ is right (never thought I'd say that).
ANYTHING "uniform" is by definition not random.
Well, the correct term is "random numbers from a uniform distribution", and such generators certainly exist (although they're pseudo-random, of course).
I lost my temper because I've spent considerable time helping sosys, only to realize he's a notorious cheater. Sosys claims to be a college student attending a CS class. With the knowledge, or rather lack thereof, he's displaying, he must've cheated systematically for a long, long time. I'm quite liberal about helping people with their homework, but I won't help a leech become a programmer. -
Random generator for given distribution function
I need to write a random generator, not only for the normal distribution but for any distribution function.
That function can be a black-box distribution function, or it can be a table [ x | f(x) ].
For those not familiar with this term, a distribution function is an f(x) that returns the probability that the random number is less than x: f(-inf) = 0, f(+inf) = 1, x > y => f(x) >= f(y).
I don't have my stats textbooks with me, but Google is your friend! :-)
This looks like what we covered in stats class:
http://www.mathworks.com/access/helpdesk/help/toolbox/stats/prob_di7.shtml
>
Inversion
The inversion method works due to a fundamental theorem that relates the uniform distribution to other continuous distributions.
If F is a continuous distribution with inverse F^-1, and U is a uniform random number, then F^-1(U) has distribution F.
So, you can generate a random number from a distribution by applying the inverse function for that distribution to a uniform random number. Unfortunately, this approach is usually not the most efficient.
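As a concrete sketch of the inversion method above: the exponential distribution has CDF F(x) = 1 - exp(-rate * x), whose inverse is F^-1(u) = -ln(1 - u) / rate, so feeding a uniform random number through the inverse yields an exponential sample. (The thread discusses Java's java.util.Random; the sketch below uses Python's equivalent uniform generator purely for illustration.)

```python
import math
import random

def sample_exponential(rate, rng=random):
    """Draw one exponentially distributed sample via inversion.

    The exponential CDF is F(x) = 1 - exp(-rate * x), so its inverse is
    F^-1(u) = -ln(1 - u) / rate. Applying it to a uniform U in [0, 1)
    yields a sample with distribution F.
    """
    u = rng.random()  # uniform random number in [0, 1)
    return -math.log(1.0 - u) / rate

# Sanity check: the mean of Exp(rate) is 1/rate.
rng = random.Random(42)
samples = [sample_exponential(2.0, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # should be close to 1/2.0 = 0.5
```

The same idea works for a tabulated [ x | f(x) ] distribution: invert the table by searching for the first x whose f(x) exceeds the uniform draw, interpolating between table rows.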
How do you get a random number from a uniform distribution? Easy: with java.util.Random, or some other Java math package (I like http://hoschek.home.cern.ch/hoschek/colt/index.htm).
The first link details some other methods, as will most stats textbooks. -
Distribution Type in Numbers generated by Random function
Hello,
I am using the built-in Random() function in TestStand for generating random numbers with seed value of 0.
I was wondering about its distribution. Does the built-in function use a normal or uniform distribution?
Thanks
Uniform.
-
Please answer these questions.....Urgent
Q You are using Data Guard to ensure high availability. The directory structures on the primary and the standby hosts are different.
Referring to the scenario above, what initialization parameter do you set up during configuration of the standby database?
db_convert_dir_name
db_convert_file_name
db_dir_name_convert
db_directory_convert
db_file_name_convert
Oracle 9i Administration, Question 1 of 12
Q What facility does Oracle provide to detect chained and migrated rows after the proper tables have been created?
The RDBMS cannot detect this. It must use regular export and import with compress=y to remove chained and migrated rows as part of the regular database.
The UTLCHAIN utility
The DBMS_REPAIR package
The ANALYZE command with the LIST CHAINED ROWS option
The DBMS_MIG_CHAIN built-in package
Q While doing an export, the following is encountered:
ORA-1628 ... max # extents ... reached for rollback segment ..
Referring to the scenario above, what do you do differently so that the export is resumed even after getting the space allocation error?
Use the RESUMABLE=Y option for the export.
Run the export with the AUTO_ROLLBACK_EXTEND=Y option.
Increase the rollback segment extents before running the export.
Use the RESUME=Y option for the export.
Monitor the rollback segment usage while the export is running and increase it if it appears to be running out of space.
Q
The DBCA (Database Configuration Assistant) prompts the installer to enter the password for which default users?
SYS and SYSTEM
OSDBA and INTERNAL
SYSOPER and INTERNAL
SYS and INTERNAL
SYSTEM and SYSDBA
Q You are designing the physical database for an application that stores dates and times. This will be accessed by users from all over the world in different time zones. Each user needs to see the time in his or her time zone.
Referring to the scenario above, what Oracle data type do you use to facilitate this requirement?
DATE
TIMESTAMP WITH TIME ZONE
TIMESTAMP
DATETIME
TIMESTAMP WITH LOCAL TIME ZONE
Q Which one of the following conditions prevents you from redefining a table online?
The table has a composite primary key.
The table is partitioned by range.
The table's organization is index-organized.
The table has materialized views defined on it.
The table contains columns of data type LOB.
Q An Oracle database administrator is upgrading from Oracle 8.1.7 to Oracle 9i.
Referring to the scenario above, which one of the following scripts does the Oracle database administrator run after verifying all steps in the upgrade checklist?
u8.1.7.sql
u81700.sql
u0900020.sql
u0801070.sql
u0817000.sql
Q What command do you use to drop a temporary tablespace and the associated OS files?
ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP;
ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP;
ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP INCLUDING DATAFILES;
ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP CASCADE;
ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP INCLUDING CONTENTS
Q You wish to use a graphical interface to manage database locks and to identify blocking locks.
Referring to the scenario above, what DBA product does Oracle offer that provides this functionality?
Oracle Expert, a tool in the Oracle Enterprise Manager product
Lock Manager, a tool in the base Oracle Enterprise Manager (OEM) product, as well as the console
Lock Manager, a tool in Oracle Enterprise Manager's Tuning Pack
The console of Oracle Enterprise Manager
Viewing the Lock Manager charts of the Oracle Performance Manager, a tool in the Diagnostics Pack add on
Q CREATE DATABASE abc
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 20
MAXLOGHISTORY 100
Referring to the code segment above, how do you change the MAX parameters shown?
They can be changed using an ALTER SYSTEM command, but the database must be in the NOMOUNT state.
The MAX parameters cannot be changed without exporting the entire database, re-creating it, and importing.
They can be changed using an ALTER SYSTEM command while the database is open.
They can be changed in the init.ora file, but the database must be restarted for the values to take effect.
They cannot be changed unless you re-create your control file
Q You need to change the archivelog mode of an Oracle database.
Referring to the scenario above, what steps do you take before actually changing the archivelog mode?
Execute the archive log list command
Start up the instance and mount the database but do not open it.
Start up the instance and mount and open the database in restricted mode.
Kill all user sessions to ensure that there is no database activity that might trigger redolog activity.
Take all tablespaces offline
Q You are experiencing performance problems due to network traffic. One way to tune this is by setting the SDU size.
Referring to the scenario above, why do you change the SDU size?
A high-speed network is available where the data transmission effect is negligible.
The application can be tuned to account for the delays.
The requests to the database return small amounts of data as in an OLTP system.
The data coming back from the server are fragmented into several packets.
A large number of users are logged on concurrently to the system.
Q When interpreting statistics from the v$sysstat, what factor do you need to keep in mind that can skew your statistics?
Choice 1 The statistics are static and must be updated by running the analyze command to include the most recent activity.
Choice 2 The statistics are only valid as a point in time snapshot of activity.
Choice 3 The statistics gathered by v$sysstat include database startup activities and database activity that initially populates the database buffer cache and shared pool.
Choice 4 The statistics do not include administrative users.
Choice 5 The statistics gathered are based on individual sessions, so you must interpret them based on the activity and application in which the user was involved at the time you pull the statistics.
Q You want to shut down the database, but you do not want client connections to lose any non-committed work. You also do not want to wait for every open session to disconnect.
Referring to the scenario above, what method do you use to shut down the database?
Choice 1 Shutdown abort
Choice 2 Shutdown immediate
Choice 3 Shutdown transactional
Choice 4 Shutdown restricted sessions
Choice 5 Shutdown normal
Q What step or steps do you take to enable Automatic Undo Management (AUM)?
Choice 1 Create the UNDO tablespace, then ALTER SYSTEM SET AUTO_UNDO.
Choice 2 Use ALTER SYSTEM SET AUTO_UNDO; parameter.
Choice 3 Add UNDO_MANAGEMENT=AUTO parameter to init.ora, stop/start the database.
Choice 4 Add UNDO_AUTO to parameter to init.ora, stop/start the database, and create the UNDO tablespace.
Choice 5 Add UNDO_MANAGEMENT=AUTO parameter to init.ora, create the UNDO tablespace, stop/start the database
Q What Oracle 9i feature allows the database administrator to create tablespaces, datafiles, and log groups WITHOUT specifying physical filenames?
Choice 1 Dynamic SGA
Choice 2 Advanced Replication
Choice 3 Data Guard
Choice 4 Oracle Managed Files
Choice 5 External Tables
Q What package is used to specify audit requirements for a given table?
Choice 1 DBMS_TRACE
Choice 2 DBMS_FGA
Choice 3 DBMS_AUDIT
Choice 4 DBMS_POLICY
Choice 5 DBMS_OBJECT_AUDIT
Q What facility does Oracle provide to detect chained and migrated rows after the proper tables have been created?
Choice 1 The ANALYZE command with the LIST CHAINED ROWS option
Choice 2 The RDBMS cannot detect this. It must use regular export and import with compress=y to remove chained and migrated rows as part of the regular database.
Choice 3 The DBMS_MIG_CHAIN built-in package
Choice 4 The DBMS_REPAIR package
Choice 5 The UTLCHAIN utility
Q What are the three functions of an undo segment?
Choice 1 Rolling back archived redo logs, database recovery, recording user trace information
Choice 2 The rollback segment has only one purpose, and that is to roll back transactions that are aborted.
Choice 3 Rolling back uncommitted transactions, maintaining read consistency, logging processed SQL statements
Choice 4 Rolling back transactions, maintaining read consistency, database recovery
Choice 5 Rolling back transactions, recording Data Manipulation Language (DML) statements processed against the database, recording Data Definition Language (DDL) statements processed against the database
Q Which one of the following describes locally managed tablespaces?
Choice 1 Tablespaces within a Recovery Manager (RMAN) repository
Choice 2 Tablespaces that are located on the primary server in a distributed database
Choice 3 Tablespaces that use bitmaps within their datafiles, rather than data dictionaries, to manage their extents
Choice 4 Tablespaces that are managed via object tables stored in the system tablespace
Choice 5 External tablespaces that are managed locally within an administrative repository serving an Oracle distributed database or Oracle Parallel Server
Q The schema in a database you are administering has a very complex and non-user friendly table and column naming system. You need a simplified schema interface to query and on which to report.
Which one of the following mechanisms do you use to meet the requirement stated in the above scenario?
Choice 1 Synonym
Choice 2 Stored procedure
Choice 3 Labels
Choice 4 Trigger
Choice 5
View
Q You need to change the archivelog mode of an Oracle database.
Referring to the scenario above, what steps do you take before actually changing the archivelog mode?
Choice 1 Start up the instance and mount the database but do not open it.
Choice 2 Execute the archive log list command
Choice 3 Kill all user sessions to ensure that there is no database activity that might trigger redolog activity.
Choice 4 Take all tablespaces offline.
Choice 5 Start up the instance and mount and open the database in restricted mode.
Q The Oracle Internet Directory debug log needs to be changed to show the following events information.
Given the Debug Event Types and their numeric values:
Starting and stopping of different threads. Process related. - 4
Detail level. Shows the spawned commands and the command-line arguments passed - 32
Operations being performed by configuration reader thread. Configuration refresh events. - 64
Actual configuration reading operations - 128
Operations being performed by scheduler thread in response to configuration refresh events, and so on - 256
What statement turns debug on for all of the above event types?
Choice 1 oidctl server=odisrv debug=4 debug=32 debug=64 debug=128 debug=256 start
Choice 2 oidctl server=odisrv debug="4,32,64,128,256" start
Choice 3 oidctl server=odisrv flags="debug=4 debug=32 debug=64 debug=128 debug=256" start
Choice 4 oidctl server=odisrv flags="debug=484" start
Choice 5 oidctl server=odisrv flags="debug=4,32,64,128,256" start
Q Which Data Guard mode has the lowest performance impact on the primary database?
Choice 1 Instant protection mode
Choice 2 Guaranteed protection mode
Choice 3 Rapid protection mode
Choice 4 Logfile protection mode
Choice 5 Delayed protection mode
Q In a DSS environment, the SALES data is kept for a rolling window of the past two years.
Referring to the scenario above, what type of partitioning do you use for this data?
Choice 1 Hash Partitioning
Choice 2 Range Partitioning
Choice 3 Equipartitioning
Choice 4 List Partitioning
Choice 5 Composite Partitioning
Q What are the three main areas of the SGA?
Choice 1 Log buffer, shared pool, database writer
Choice 2 Database buffer cache, shared pool, log buffer
Choice 3 Shared pool, SQL area, redo log buffer
Choice 4 Log writer, archive log, database buffer
Choice 5
Database buffer cache, log writer, shared pool
Q When performing full table scans, what happens to the blocks that are read into buffers?
Choice 1 They are put on the MRU end of the buffer list by default.
Choice 2 They are put on the MRU end of the buffer list if the NOCACHE clause was used while altering or creating the table.
Choice 3 They are read into the first free entry in the buffer list.
Choice 4 They are put on the LRU end of the buffer list if the CACHE clause was used while altering or creating the table.
Choice 5 They are put on the LRU end of the buffer list by default
Q Standard security policy is to force users to change their passwords the first time they log in to the Oracle database.
Referring to the scenario above, how do you enforce this policy?
Choice 1 Use the FORCE PASSWORD EXPIRE clause when the users are first created in the database.
Choice 2 Ask the users to follow the standards and trust them to do so.
Choice 3 Periodically compare the users' passwords with their initial password and generate a report of the users violating the standard.
Choice 4 Use the PASSWORD EXPIRE clause when the users are first created in the database.
Choice 5 Check the users' passwords after they first log in to see if they have changed it. If not, remind them to do so.
Q What object privilege is necessary for a foreign key constraint to be created and enforced on the referenced table?
Choice 1 References
Choice 2 Alter
Choice 3 Update
Choice 4 Resource
Choice 5 Select
Q What command do you use to drop a temporary tablespace and the associated OS files?
Choice 1 ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP INCLUDING CONTENTS
Choice 2 ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP INCLUDING DATAFILES;
Choice 3 ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP;
Choice 4 ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP;
Choice 5 ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP CASCADE;
Q You need to implement a failover strategy using TAF. You do not have enough resources to ensure that your backup Oracle instance will be up and running in parallel with the primary.
Referring to the scenario above, what failover mode do you use?
Choice 1 FAILOVER_MODE=manual
Choice 2 FAILOVER_MODE=none
Choice 3 FAILOVER_MODE=auto
Choice 4 FAILOVER_MODE=basic
Choice 5 FAILOVER_MODE=preconnect
Q An Oracle database used for an OLTP application is encountering the "snapshot too old" error.
Referring to the scenario above, which database object or objects do you query in order to set the OPTIMAL parameter for the rollback segments?
Choice 1 V$ROLLNAME and V$ROLLSTAT
Choice 2 V$ROLLNAME
Choice 3 V$ROLLSTAT
Choice 4 DBA_ROLL and DBA_ROLLSTAT
Choice 5 DBA_ROLLBACK_SEG
Q What are five background processes that must always be running in a functioning Oracle Instance?
Choice 1 SMON (system monitor), PMON (process monitor), RECO (recoverer process), ARCH (archive process), CKPT (checkpoint process)
Choice 2 DBW0 (database writer), SMON (system monitor), PMON (process monitor), LGWR (log writer), CKPT (checkpoint process)
Choice 3 DBW0 (database writer), SMON (system monitor), PMON (process monitor), D000 (Dispatcher process), CKPT (checkpoint process)
Choice 4 DBW0 (database writer), CKPT (checkpoint process), RECO (recoverer process), LGWR (log writer), ARCH (archive process)
Choice 5 DBW0 (database writer), LGWR (log writer), ARCH (archive process), CKPT (checkpoint process), RECO (recoverer process)
You have two large tables with thousands of rows. To select the rows from table_1 that are not referenced by an indexed common column (e.g. col_1) in table_2, you issue the following statement:
select * from table_1
where col_1 NOT in (select col_1 from table_2);
This statement is taking a very long time to return its result set.
Referring to the scenario above, which equivalent statement returns much faster?
Choice 1
select * from table_1
where not exists (select * from table_2)
Choice 2
select * from table_2
where col_1 not in (select col_1 from table_1)
Choice 3
select * from table_1
where col_1 in (select col_1 from table_2 where col_1 = table_1.col_1)
Choice 4
select * from table_1
where not exists (select 'x' from table_2 where col_1 = table_1.col_1)
Choice 5
select table_1.* from table_1, table_2
where table_1.col_1 = table_2.col_1 (+)
Performance is poor during peak transaction periods on a database you administer. You would like to view some statistics on areas such as LGWR (log writer) waits.
Referring to the scenario above, what performance view do you query to access these statistics?
Choice 1
DBA_CATALOG
Choice 2
V$SESS_IO
Choice 3
V$SYSSTAT
Choice 4
V$PQ_SYSSTAT
Choice 5
V$SQLAREA
You need to assess the performance of your shared pool at instance startup, but you cannot restart the database.
Referring to the scenario above, how do you empty your SGA?
Choice 1
Execute $ORACLE_HOME/bin/db_shpool_flush
Choice 2
ALTER SYSTEM FLUSH SHARED_POOL
Choice 3
ALTER SYSTEM CLEAR SHARED POOL
Choice 4
DELETE FROM SYS.V$SQLAREA
Choice 5
DELETE FROM SYS.V$SQLTEXT
You are reading the explain plan of a problem query and notice that full table scans are used with a HASH join.
Referring to the scenario above, in what instance is a HASH join beneficial?
Choice 1
When joining two small tables--neither having any primary keys or unique indexes
Choice 2
When no indexes are present
Choice 3
When using the parallel query option
Choice 4
When joining two tables where one table may be significantly larger than the other
Choice 5
Only when using the rule-based optimizer
An Oracle database administrator is upgrading from Oracle 8.1.7 to Oracle 9i.
Referring to the scenario above, which one of the following scripts does the Oracle database administrator run after verifying all steps in the upgrade checklist?
Choice 1
u0817000.sql
Choice 2
u0900020.sql
Choice 3
u8.1.7.sql
Choice 4
u81700.sql
Choice 5
u0801070.sql
You have a large On-Line Transaction Processing (OLTP) database running in archive log mode with two redo log groups that have two members each.
Referring to the above scenario, to avoid stalling during peak activity periods, which one of the following actions do you take?
Choice 1
Add a third member to each of the groups.
Choice 2
Increase your LOG_CHECKPOINT_INTERVAL setting.
Choice 3
Turn off archive logging.
Choice 4
Add a third redo log group.
Choice 5
Turn off redo log multiplexing
What object does a database administrator create to store precompiled summary data?
Choice 1
Replicated Table
Choice 2
Archive Log
Choice 3
Temporary Tablespace
Choice 4
Cached Table
Choice 5
Materialized View
Which one of the following statements do you execute in order to find the current default temporary tablespace?
Choice 1
SELECT property_name, property_value FROM v$database_properties
Choice 2
show parameter curr_default_temp_tablespace
Choice 3
SELECT property_name, property_value FROM all_database_properties
Choice 4
SELECT property_name, property_value FROM database_properties
Choice 5
SELECT property_name, property_value FROM dba_database_properties
In which one of the following situations do you use a bitmap index?
Choice 1
With column values that are guaranteed to be unique
Choice 2
With column values having a high cardinality
Choice 3
With column values having a consistently uniform distribution
Choice 4
With column values having a low cardinality
Choice 5
With column values having a non-uniform distribution
A table has more than two million rows and, if exported, will exceed 4 GB in size with data, indexes, and constraints. The UNIX system you are using has a 2 GB limit on file sizes. This table needs to be backed up using Oracle EXPORT.
There are two ways this table can be exported and split into multiple files. One way is to use the UNIX pipe, split, and compress commands in conjunction with the Oracle EXPORT utility to generate multiple equally-sized files.
Referring to the scenario above, what is the other way that you can export and split into multiple files?
Choice 1
Export the data into one file and the index into another file.
Choice 2
Use a WHERE clause with the export to limit the number of rows returned.
Choice 3
Vertically partition the table into sizes of less than 2 GB and then export each partition as a separate file.
Choice 4
Specify the multiple files in the FILE parameter and specify the FILESIZE in the EXPORT parameter file.
Choice 5
Horizontally partition the table into sizes of less than 2 GB and then export each partition as a separate file.
Which one of the following statements describes the PASSWORD_GRACE_TIME profile setting?
Choice 1
It specifies the grace period, in days, for changing the password once expired.
Choice 2
It specifies the grace period, in days, for changing the password from the time it is initially set and the time the account is made active.
Choice 3
It specifies the grace period, in minutes, for changing the password once expired.
Choice 4
It specifies the grace period, in days, for changing the password after the first successful login after the password has expired.
Choice 5
It specifies the grace period, in hours, for changing the password once expired.
In OEM, what color and icon are associated with a warning?
Choice 1
Yellow hexagon
Choice 2
Yellow flag
Choice 3
Red flag
Choice 4
Gray flag
Choice 5
Red hexagon
What parameter in the SQLNET.ORA file specifies the order of the naming methods to be used?
Choice 1
NAMES.SEARCH_ORDER
Choice 2
NAMES.DOMAIN_HINTS
Choice 3
NAMES.DIRECTORY_PATH
Choice 4
NAMES.DOMAINS
Choice 5
NAMES.DIRECTORY
An Oracle 9i database instance has automatic undo management enabled. This allows you to use the Flashback Query feature of Oracle 9i.
Referring to the scenario above, what UNDO parameter needs to be set so that this feature allows consistent queries of data up to 90 days old?
Choice 1
UNDO_TABLESPACE
Choice 2
UNDO_TIMELIMIT
Choice 3
UNDO_MANAGEMENT
Choice 4
UNDO_FLASHBACKTO
Choice 5
UNDO_RETENTION
DB_BLOCK_SIZE=8192
DB_CACHE_SIZE=128M
DB_2K_CACHE_SIZE=64M
DB_4K_CACHE_SIZE=32M
DB_8K_CACHE_SIZE=16M
DB_16K_CACHE_SIZE=8M
Referring to the initialization parameter settings above, what is the size of the cache of standard block size buffers?
Choice 1
8 M
Choice 2
16 M
Choice 3
32 M
Choice 4
64 M
Choice 5
128 M
DB_CREATE_FILE_DEST='/u01/oradata/app01'
DB_CREATE_ONLINE_LOG_DEST_1='/u02/oradata/app01'
Referring to the sample code above, which one of the following statements is NOT correct?
Choice 1
Data files created with no location specified are created in the DB_CREATE_FILE_DEST directory.
Choice 2
Control files created with no location specified are created in the DB_CREATE_ONLINE_LOG_DEST_1 directory.
Choice 3
Redolog files created with no location specified are created in the DB_CREATE_ONLINE_LOG_DEST_1 directory.
Choice 4
Control files created with no location specified are created in the DB_CREATE_FILE_DEST directory.
Choice 5
Temp files created with no location specified are created in the DB_CREATE_FILE_DEST directory.
LogMiner GUI is a part of which one of the following?
Choice 1
Oracle Enterprise Manager
Choice 2
Oracle LogMiner Plug-In
Choice 3
Oracle Diagnostics Pack
Choice 4
Oracle Performance Tuning Pack
Choice 5
Oracle LogMiner StandAlone GUI
The schema in a database you are administering has a very complex and non-user friendly table and column naming system. You need a simplified schema interface to query and on which to report.
Which one of the following mechanisms do you use to meet the requirement stated in the above scenario?
Choice 1
View
Choice 2
Trigger
Choice 3
Stored procedure
Choice 4
Synonym
Choice 5
Labels
alter index gl.GL_JE_LINES_N1 rebuild
You determine that an index has too many extents and want to rebuild it to avoid fragmentation performance degradation.
When you issue the statement above, where is the rebuilt index stored?
Choice 1
In the default tablespace for the login name you are using
Choice 2
You cannot rebuild an index. You must drop the existing index and re-create it using the create index statement.
Choice 3
In the system tablespace
Choice 4
In the same tablespace as it is currently stored
Choice 5
In the index tablespace respective to the data table on which the index is built
Which one of the following describes locally managed tablespaces?
Choice 1
Tablespaces within a Recovery Manager (RMAN) repository
Choice 2
External tablespaces that are managed locally within an administrative repository serving an Oracle distributed database or Oracle Parallel Server
Choice 3
Tablespaces that are located on the primary server in a distributed database
Choice 4
Tablespaces that use bitmaps within their datafiles, rather than data dictionaries, to manage their extents
Choice 5
Tablespaces that are managed via object tables stored in the system tablespace
Which method of database backup supports true incremental backups?
Choice 1
Export
Choice 2
Operating System backups
Choice 3
Oracle Enterprise Backup Utility
Choice 4
Incremental backups are not supported. You must use full or cumulative backups.
Choice 5
Recovery Manager
You are using Data Guard to ensure high availability. The directory structures on the primary and the standby hosts are different.
Referring to the scenario above, what initialization parameter do you set up during configuration of the standby database?
Choice 1
db_dir_name_convert
Choice 2
db_convert_dir_name
Choice 3
db_convert_file_name
Choice 4
db_directory_convert
Choice 5
db_file_name_convert
Tablespace APP_INDX is put in online backup mode when redo log 744 is current. When APP_INDX is taken out of online backup mode, redo log 757 is current.
Referring to the scenario above, if the backup is restored, what are the start and end redo logs used, in order, to perform a successful point-in-time recovery of APP_INDX?
Choice 1
Start Redo Log 744, End Redo Log 757
Choice 2
Start Redo Log 743, End Redo Log 756
Choice 3
Start Redo Log 745, End Redo Log 756
Choice 4
Start Redo Log 744, End Redo Log 756
Choice 5
Start Redo Log 743, End Redo Log 757
You want to make new data entered or changed in a table adhere to a given integrity constraint, but data exist in the table that violates the constraint.
Referring to the scenario above, what do you do?
Choice 1
Use an enabled novalidate constraint.
Choice 2
Use an enabled validate constraint.
Choice 3
Use a deferred constraint.
Choice 4
Use a disabled constraint.
Choice 5
You cannot enforce this type of constraint.
In Oracle 9i, the connect internal command has been discontinued.
Referring to the text above, how do you achieve a privileged connection in Oracle 9i?
Choice 1
CONNECT <username> AS SYSOPER where username has DBA privileges.
Choice 2
CONNECT <username> as SYSDBA.
Choice 3
Connect using Enterprise Manager.
Choice 4
CONNECT sys.
Choice 5
Use CONNECT <username> as normal but include the user in the external password file.
How many partitions can a table have?
Choice 1
64
Choice 2
255
Choice 3
1,024
Choice 4
65,535
Choice 5
Unlimited
In Cache Fusion, when does a request by one process for a resource owned by another process fail?
Choice 1
When a null mode resource request is made for a resource already owned in exclusive mode by another process
Choice 2
When a shared mode resource request is made for a resource already owned in shared mode by another process
Choice 3
When a shared mode resource request is made for a resource already owned in null mode by another process
Choice 4
When an exclusive mode resource request is made for a resource already owned in null mode by another process
Choice 5
When an exclusive mode resource request is made for a resource already owned in shared mode by another process
The Oracle Internet Directory debug log needs to be changed to show the following events information.
Given the Debug Event Types and their numeric values:
Starting and stopping of different threads. Process related. - 4
Detail level. Shows the spawned commands and the command-line arguments passed - 32
Operations being performed by configuration reader thread. Configuration refresh events. - 64
Actual configuration reading operations - 128
Operations being performed by scheduler thread in response to configuration refresh events, and so on - 256
What statement turns debug on for all of the above event types?
Choice 1
oidctl server=odisrv flags="debug=4 debug=32 debug=64 debug=128 debug=256" start
Choice 2
oidctl server=odisrv debug="4,32,64,128,256" start
Choice 3
oidctl server=odisrv flags="debug=4,32,64,128,256" start
Choice 4
oidctl server=odisrv flags="debug=484" start
Choice 5
oidctl server=odisrv debug=4 debug=32 debug=64 debug=128 debug=256 start
A new OFA-compliant database is being installed using the Oracle installer. The mount point being used is /u02.
Referring to the scenario above, what is the default value for ORACLE_BASE?
Choice 1
/usr/app/oracle
Choice 2
/u02/oracle
Choice 3
/u02/app/oracle
Choice 4
/u01/app/oracle
Choice 5
/u02/oracle_base
You need to start the Connection Manager Gateway and the Connections Admin processes.
Referring to the scenario above, what command do you execute?
Choice 1
CMCTL START CM
Choice 2
CMCTL START CMADMIN
Choice 3
CMCTL START CMAN
Choice 4
CMCTL START CMGW
Choice 5
CMCTL START CMGW CMADM
When performing full table scans, what happens to the blocks that are read into buffers?
Choice 1
They are read into the first free entry in the buffer list.
Choice 2
They are put on the MRU end of the buffer list if the NOCACHE clause was used while altering or creating the table.
Choice 3
They are put on the LRU end of the buffer list if the CACHE clause was used while altering or creating the table.
Choice 4
They are put on the LRU end of the buffer list by default.
Choice 5
They are put on the MRU end of the buffer list by default.
You wish to take advantage of the Oracle LOB datatypes, but you need to convert your existing LONG or LONG RAW columns to Character Large Object (CLOB) and Binary Large Object (BLOB) datatypes.
Referring to the scenario above, what is the quickest method to use to perform this conversion?
Choice 1
Use the to_lob function when selecting data from the existing table into a new table.
Choice 2
Use the ALTER TABLE statement and MODIFY the column to the new LOB datatype.
Choice 3
You must export the existing data to external files and then re-import them as BFILE external LOBS.
Choice 4
Create a new table with the same columns but with the LONG or LONG RAW column changed to a CLOB or BLOB type. The next step is to INSERT INTO newtable select * from oldtable.
Choice 5
LONG and LONG RAW datatypes are not compatible with LOBS and cannot be converted within the Oracle database.
You need to redefine the JOURNAL table in the stress test environment. You want to check first to see if it is possible to redefine this table online.
Referring to the scenario above, what statement do you execute that checks whether or not the JOURNAL table can be redefined online if you are connected as the table owner?
Choice 1
Execute DBMS_REDEFINITION.CHECK_TABLE_REDEF(USER,'JOURNAL');
Choice 2
Execute DBMS_REDEFINITION.VERIFY_REDEF_TABLE(USER,'JOURNAL');
Choice 3
Execute DBMS_REDEFINITION.CAN_REDEF_TABLE(USER,'JOURNAL');
Choice 4
Execute DBMS_REDEFINITION.START_REDEF_TABLE(USER,'JOURNAL');
Choice 5
Execute DBMS_REDEFINITION.SYNC_INTERIM_TABLE(USER,'JOURNAL');
An Oracle 9i database instance has automatic undo management enabled. This allows you to use the Flashback Query feature of Oracle 9i.
Referring to the scenario above, what UNDO parameter needs to be set so that this feature allows consistent queries of data up to 90 days old?
Choice 1
UNDO_TIMELIMIT
Choice 2
UNDO_MANAGEMENT
Choice 3
UNDO_RETENTION
Choice 4
UNDO_TABLESPACE
Choice 5
UNDO_FLASHBACKTO
Which one of the following procedures is used for the extraction of the LogMiner dictionary?
Choice 1
DBMS_LOGMNR_D.EXTRACT
Choice 2
DBMS_LOGMNR.BUILD
Choice 3
DBMS_LOGMINER_D.BUILD
Choice 4
DBMS_LOGMNR_D.BUILD_DICT
Choice 5
DBMS_LOGMNR_D.BUILD
set pause on;
column sql_text format a35;
select sid, osuser, username, sql_text
from v$session a, v$sqlarea b
where a.sql_address=b.address
and a.sql_hash_value=b.hash_value
Why is the SQL*Plus sample code segment above used?
Choice 1
To view full text search queries by issuing user
Choice 2
To list all operating system users connected to the database
Choice 3
To view SQL statements issued by connected users
Choice 4
To detect deadlocks
Choice 5
To view paused database sessions
When dealing with very large tables in which the size greatly exceeds the size of the System Global Area (SGA) data block buffer cache, which one of the following operations must be avoided?
Choice 1
Group operations
Choice 2
Aggregates
Choice 3
Index range scans
Choice 4
Multi-table joins
Choice 5
Full table scans
You are reading the explain plan of a problem query and notice that full table scans are used with a HASH join.
Referring to the scenario above, in what instance is a HASH join beneficial?
Choice 1
Only when using the rule-based optimizer
Choice 2
When joining two small tables--neither having any primary keys or unique indexes
Choice 3
When no indexes are present
Choice 4
When joining two tables where one table may be significantly larger than the other
Choice 5
When using the parallel query option
Performance is poor during peak transaction periods on a database you administer. You would like to view some statistics on areas such as LGWR (log writer) waits.
Referring to the scenario above, what performance view do you query to access these statistics?
Choice 1
V$SQLAREA
Choice 2
V$SYSSTAT
Choice 3
V$SESS_IO
Choice 4
V$PQ_SYSSTAT
Choice 5
DBA_CATALOG
What security feature allows the database administrator to monitor successful and unsuccessful attempts to access data?
Choice 1
Autotrace
Choice 2
Fine-Grained Auditing
Choice 3
Password auditing
Choice 4
sql_trace
Choice 5
tkprof
You need to configure a default domain that is automatically appended to any unqualified net service name.
What Oracle-provided network configuration tool do you use to accomplish the above task?
Choice 1
Oracle Names Control Utility
Choice 2
Configuration File Utility
Choice 3
Oracle Network Configuration Assistant
Choice 4
Listener Control Utility
Choice 5
Oracle Net Manager
You are experiencing performance problems due to network traffic. One way to tune this is by setting the SDU size.
Referring to the scenario above, why do you change the SDU size?
Choice 1
The requests to the database return small amounts of data as in an OLTP system.
Choice 2
The application can be tuned to account for the delays.
Choice 3
The data coming back from the server are fragmented into several packets.
Choice 4
A large number of users are logged on concurrently to the system.
Choice 5
A high-speed network is available where the data transmission effect is negligible.
You have partitioned the table ORDER on the ORDERID column using range partitioning. You want to create a locally partitioned index on this table. You also want this index to be unique.
Referring to the scenario above, what is required for the creation of this unique locally partitioned index?
Choice 1
A unique partitioned index on a table cannot be local.
Choice 2
There can be only one unique locally partitioned index on the table.
Choice 3
The index has to be equipartitioned.
Choice 4
The table's primary key columns should be included in the index key.
Choice 5
The ORDERID column has to be part of the index's key.
You have a large On-Line Transaction Processing (OLTP) database running in archive log mode with two redo log groups that have two members each.
Referring to the above scenario, to avoid stalling during peak activity periods, which one of the following actions do you take?
Choice 1
Turn off redo log multiplexing.
Choice 2
Increase your LOG_CHECKPOINT_INTERVAL setting.
Choice 3
Add a third member to each of the groups.
Choice 4
Add a third redo log group.
Choice 5
Turn off archive logging.
When transporting a tablespace, the tablespace needs to be self-contained.
Referring to the scenario above, in which one of the following is the tablespace self-contained?
Choice 1 A referential integrity constraint points to a table across a set boundary.
Choice 2 A partitioned table is partially contained in the tablespace.
Choice 3 An index inside the tablespace is for a table outside of the tablespace.
Choice 4 A corresponding index for a table is outside of the tablespace.
Choice 5 A table inside the tablespace contains a LOB column that points to LOBs outside the tablespace.
You have experienced a database failure requiring a full database restore. Downtime is extremely costly, as is any form of data loss. You run the database in archive log mode and have a full database backup from three days ago. You have a database export from last night. You are not running Oracle Parallel Server (OPS).
Referring to the above scenario, how do you minimize downtime and data loss?
Choice 1 Import the data from the export using direct-path loading.
Choice 2 Create a standby database and activate it.
Choice 3 Perform a restore of necessary files and use parallel recovery operations to speed the application of redo entries.
Choice 4 Conduct a full database restore and bring the database back online immediately. Apply redo logs during a future maintenance window.
Choice 5 Perform a restore and issue a recover database command
You have two large tables with thousands of rows. To select rows from the table_1, which are not referenced by an indexed common column (e.g. col_1) in table_2, you issue the following statement:
select * from table_1
where col_1 NOT in (select col_1 from table_2);
This statement is taking a very long time to return its result set.
Referring to the scenario above, which equivalent statement returns much faster?
Choice 1 select * from table_1
where col_1 in (select col_1 from table_2 where col_1 = table_1.col_1)
Choice 2 select * from table_2
where col_1 not in (select col_1 from table_1)
Choice 3 select * from table_1
where not exists (select 'x' from table_2 where col_1 = table_1.col_1)
Choice 4 select table_1.* from table_1, table_2
where table_1.col_1 = table_2.col_1 (+)
Choice 5 select * from table_1
Which one of the following initialization parameters is obsolete in Oracle 9i?
Choice 1 LOG_ARCHIVE_DEST
Choice 2 GC_FILES_TO_LOCKS
Choice 3 FAST_START_MTTR_TARGET
Choice 4 DB_BLOCK_BUFFERS
Choice 5 DB_BLOCK_LRU_LATCHES
You find that one of your tablespaces is running out of disk space.
Referring to the scenario above, which one of the following is NOT a valid option to increase the space available to the tablespace?
Choice 1 Move some segments to other tablespaces.
Choice 2 Resize an existing datafile in the tablespace.
Choice 3 Add another datafile to the tablespace.
Choice 4 Increase the MAX_EXTENTS for the tablespace.
Choice 5 Turn AUTOEXTEND on for one or more datafiles in the tablespace.
What tools or utilities do you use to transfer the data dictionary's structural information of transportable tablespaces?
Choice 1 DBMS_TTS
Choice 2 SQL*Loader
Choice 3 Operating System copy commands
Choice 4 DBMS_STATS
Choice 5 EXP and IMP
Which one of the following, if backed up, is potentially problematic to a complete recovery?
Choice 1
Control file
Choice 2
System Tablespace
Choice 3
Data tablespaces
Choice 4
Online Redo logs
Choice 5
All archived redologs after the last backup
Your data warehouse performs frequent full table scans. Your DB_BLOCK_SIZE is 16,384.
Referring to the scenario above, what parameter do you use to reduce disk I/O?
Choice 1 LOG_CHECKPOINT_TIMEOUT
Choice 2 DBWR_IO_SLAVES
Choice 3 DB_FILE_MULTIBLOCK_READ_COUNT
Choice 4 DB_WRITER_PROCESSES
Choice 5 DB_BLOCK_BUFFERS
Which one of the following describes the "Reset database to incarnation" command used by Recovery Manager?
Choice 1 It performs a resynchronization of online redo logs to a given archive log system change number (SCN).
Choice 2 It performs point-in-time recovery when using Recovery Manager.
Choice 3 It restores the database to the initial state in which it was found when first backing it up via Recovery Manager.
Choice 4 It restores the database to a save point as defined by the version control number or incarnation number of the database.
Choice 5 It is used to undo the effect of a resetlogs operation by restoring backups of a prior incarnation of the database.
You are using the CREATE TABLE statement to populate the data dictionary with metadata to allow access to external data, where /data is a UNIX writable directory and filename.dbf is an arbitrary name.
Referring to the scenario above, which clause must you add to your CREATE TABLE statement?
Choice 1
organization external
Choice 2 external file /data/filename.dbf
Choice 3 ON /data/filename.dbf
Choice 4 organization file
Choice 5 file /data/filename.dbf
Your business user has expressed a need to be able to revert back to data that are at most eight hours old. You decide to use Oracle 9i's FlashBack feature for this purpose.
Referring to the scenario above, what is the value of UNDO_RETENTION that supports this requirement?
Choice 1 480
Choice 2 8192
Choice 3 28800
Choice 4 43200
Choice 5 28800000
Materialized Views constitute which data warehousing feature offered by Oracle?
Choice 1 FlashBack Query
Choice 2 Summary Management
Choice 3 Dimension tables
Choice 4 ETL Enhancements
Choice 5 Updateable Multi-table Views
DB_BLOCK_SIZE=8192
DB_CACHE_SIZE=128M
DB_2K_CACHE_SIZE=64M
DB_4K_CACHE_SIZE=32M
DB_8K_CACHE_SIZE=16M
DB_16K_CACHE_SIZE=8M
Referring to the initialization parameter settings above, what is the size of the cache of standard block size buffers?
Choice 1 8 M
Choice 2 16 M
Choice 3 32 M
Choice 4 64 M
Choice 5 128 M
You need to send listener log information to the Oracle Support Services. The listener name is LSNRORA1.
Referring to the scenario above, which one of the following statements do you use in the listener.ora file to generate this log information?
Choice 1 TRACE_LEVEL_LSNRORA1=debug
Choice 2 TRACE_LEVEL_LSNRORA1=admin
Choice 3 TRACE_LEVEL_LSNRORA1=5
Choice 4 TRACE_LEVEL_LSNRORA1=support
Choice 5 TRACE_LEVEL_LSNRORA1=on
Which one of the following statements causes you to choose the NOARCHIVELOG mode for an Oracle database?
Choice 1
The database does not need to be available at all times.
Choice 2
The database is used for a DSS application, and updates are applied to it once in 48 hours.
Choice 3
The database needs to be available at all times.
Choice 4
It is unacceptable to lose any data if a disk failure damages some of the files that constitute the database.
Choice 5
There will be times when you will need to recover to a point-in-time that is not current.
You are experiencing performance problems due to network traffic. One way to tune this is by setting the SDU size.
Referring to the scenario above, why do you change the SDU size?
Choice 1 A large number of users are logged on concurrently to the system.
Choice 2 A high-speed network is available where the data transmission effect is negligible.
Choice 3 The data coming back from the server are fragmented into several packets.
Choice 4 The application can be tuned to account for the delays.
Choice 5 The requests to the database return small amounts of data as in an OLTP system.
Post a few if you need answers to a few.
Anyway, my best shot:-
Q. Directories are different
A. Use db_file_name_convert. (Why? Read about it.)
Q What facility does Oracle provide to detect chained and migrated rows after the proper tables have been created?
A.The ANALYZE command with the LIST CHAINED ROWS option
Q While doing an export, the following is encountered:
A. My best guess: use the RESUMABLE=Y option for the export.
Q. The DBCA (Database Configuration Assistant) prompts the installer to enter the password for which default users?
A. SYS and SYSTEM
Q You are designing the physical database for an application that stores dates and times. This will be accessed by users from all over the world in different time zones. Each user needs to see the time in his or her time zone.
A. TIMESTAMP WITH LOCAL TIME ZONE
Q What command do you use to drop a temporary tablespace and the associated OS files?
A. ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP INCLUDING DATAFILES;
Q You wish to use a graphical interface to manage database locks and to identify blocking locks.
A. Lock Manager, a tool in the base Oracle Enterprise Manager (OEM) product, as well as the console
Q CREATE DATABASE abc
A. They cannot be changed unless you re-create your control file
Q You need to change the archivelog mode of an Oracle database.
A. Execute the archive log list command
Q When interpreting statistics from the v$sysstat, what factor do you need to keep in mind that can skew your statistics?
A. Choice 3: The statistics gathered by v$sysstat include database startup activities and the database activity that initially populates the database buffer cache and shared pool.
Q You want to shut down the database, but you do not want client connections to lose any non-committed work. You also do not want to wait for every open session to disconnect.
A. Choice 3: Shutdown transactional
Q What step or steps do you take to enable Automatic Undo Management (AUM)?
A.Choice 5 Add UNDO_MANAGEMENT=AUTO parameter to init.ora, create the UNDO tablespace, stop/start the database
Q What Oracle 9i feature allows the database administrator to create tablespaces, datafiles, and log groups WITHOUT specifying physical filenames?
A. Choice 4 Oracle Managed Files -
How to specify for JVM the stack for ONE specific thread that invokes JNI
Hello all!
I've been looking for a solution for two days in several forums, but even on the Sun Java Forums nobody was able to come up with one. If you know a better forum for this, please let me know.
Description:
I have an application that launches several threads of different types. One of them is quite specific and runs a critical JNI C++ process. The remaining ones just control other things. When I launch the application from the command line with
java -classpath ... -Xss20m -Djava.library.path ... pack.subpack.myApp
the application usually crashes: my computer's 256 MB of RAM vanishes in seconds, causing an OutOfMemoryError.
Sometimes the application works properly, but sometimes the memory usage goes up fast until a general crash.
What I believe that is going on:
When I declare -Xss20m, I understand that the JVM will attempt to allocate 20 MB more for each thread, so with 10 threads it goes up to 200 MB, and so on; however, I'd like to have 20 MB just for my critical process (which is called in one specific thread), not for every thread.
If I try to reduce the -Xss parameter to, say, 10 MB, my C++ process overflows the JVM stack for that thread.
So, does anybody know how to solve this? Please... I need the experts' help!
Thanks a lot,
Calegari
There we go...
I have this class:
package calegari.automata;

/**
 * <p>Title: </p>
 * <p>Description: </p>
 * <p>Copyright: Copyright (c) 2003</p>
 * <p>Company: </p>
 * @author Aurélio Calegari
 * @version 1.0
 */
public class Native {

    /**
     * Parameters:
     *   individuals     --> all binary individuals (AC rule)
     *   indvLength      --> number of individuals
     *   numEval         --> number of RIs
     *   generateUniform --> uniform distribution of density
     *   seed            --> seed for current generation
     */
    public native double[] AutomataIterator(int[][] individuals,
                                            int indvLength,
                                            int numEval,
                                            boolean generateUniform,
                                            long seed);

    static {
        System.loadLibrary("Native");
    }

    public Native() {
    }
}
Then, running
javah -classpath ... calegari.automata.Native
I'll get the following .h
/* DO NOT EDIT THIS FILE - it is machine generated */
#include "jni.h"
/* Header for class calegari_automata_Native */

#ifndef _Included_calegari_automata_Native
#define _Included_calegari_automata_Native
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     calegari_automata_Native
 * Method:    AutomataIterator
 * Signature: ([[IIIZJ)[D
 */
JNIEXPORT jdoubleArray JNICALL
Java_calegari_automata_Native_AutomataIterator___3_3IIIZJ
  (JNIEnv *, jobject, jobjectArray, jint, jint, jboolean, jlong);

#ifdef __cplusplus
}
#endif
#endif
Next, I built my .cpp file, shown below:
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include "jni.h"
#include "calegari_automata_Native.h"
#include "util.h"
double IndividualEvaluator(long rule[], int automataCells[][149], int numInit1s[],
                           int numOfACs, int numEvaluations);
char getNext(char simb);

JNIEXPORT jdoubleArray JNICALL
Java_calegari_automata_Native_AutomataIterator___3_3IIIZJ
  (JNIEnv *env, jobject jobj, jobjectArray indiv, jint length, jint numEval,
   jboolean isUniform, jlong seed)
{
    printf("Native JVM call for C++ critical block: Started \a[-]");
    // 10000 initial states of the cellular automaton; at ~5.7 MB this array
    // alone is the main consumer of the thread's stack
    int ACs[10000][149];
    int numIndiv = length;
    int numOfACs = numEval;
    // density of each initial state of the cellular automaton
    int num1sRIs[10000];
    // response (assumes numIndiv <= 1000)
    double resp[1000];
    // set seed
    srand((unsigned int) seed);
    // generate cellular automaton states
    if (isUniform) {            // uniform generation
        for (int i = 0; i < numEval; i++) {
            int num1s = 0;
            for (int j = 0; j < 149; j++) {
                ACs[i][j] = ((rand() % numEval) < i + 1 ? 0 : 1);
                if (ACs[i][j] == 1) num1s++;
            }
            num1sRIs[i] = num1s;
            printf(" %d ", num1s);
        }
    } else {                    // non-uniform generation
        for (int i = 0; i < numEval; i++) {
            int num1s = 0;
            for (int j = 0; j < 149; j++) {
                ACs[i][j] = rand() % 2;
                if (ACs[i][j] == 1) num1s++;
            }
            num1sRIs[i] = num1s;
        }
    }
    // load individuals and start the critical method
    char simb = '-';
    for (int i = 0; i < numIndiv; i++) {
        jintArray oneDim = (jintArray) env->GetObjectArrayElement(indiv, i);
        jint *elems = env->GetIntArrayElements(oneDim, 0); // renamed to avoid shadowing 'indiv'
        simb = getNext(simb);
        printf("\b\b%c]", simb);
        resp[i] = IndividualEvaluator(elems, ACs, num1sRIs, numOfACs, 300);
    }
    jdoubleArray retApts = env->NewDoubleArray(numIndiv);
    env->SetDoubleArrayRegion(retApts, (jsize) 0, numIndiv, (jdouble *) resp);
    printf("\nReturning to Java platform: Completed\a\a\n");
    return retApts;
}

char getNext(char simb)
{
    if (simb == '-') simb = '\\';
    else if (simb == '\\') simb = '|';
    else if (simb == '|') simb = '/';
    else if (simb == '/') simb = '-';
    return simb;
}
Then it works fine as long as we declare the stack size to the JVM; if we don't, we get a crash!
Any idea?
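One approach worth trying here, sketched under the assumption of Java 1.4 or later (this is not from the thread itself): java.lang.Thread has a four-argument constructor whose last parameter is a requested stack size in bytes, so you can ask for a large stack for just the thread that enters JNI, while leaving -Xss at its default for all other threads. Note the stack size is only a hint and some JVMs ignore it.

```java
// Sketch: request a large stack only for the thread that calls into JNI,
// instead of raising -Xss (which applies to every thread).
// The stackSize argument is a hint in bytes; some JVMs may ignore it.
public class CriticalThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable criticalWork = new Runnable() {
            public void run() {
                // Here you would invoke the native method, e.g.
                // new Native().AutomataIterator(individuals, len, numEval, true, seed);
                System.out.println("critical work on " + Thread.currentThread().getName());
            }
        };
        // Thread(ThreadGroup, Runnable, name, stackSize): ask for ~20 MB,
        // mirroring -Xss20m from the question, but for this one thread only.
        Thread critical = new Thread(null, criticalWork, "jni-critical", 20L * 1024 * 1024);
        critical.start();
        critical.join();
    }
}
```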
Thanks -
Hi!
I want to create a script that creates random (or near random) values for every single pixel of a document, similar to the "Add Noise..." filter, but with more control, such as "only b/w", "only grey", "all RGB" and "all RGB with alpha" and maybe even control over the probability distribution. Any idea how this could be tackled? Selecting every single pixel and applying a random color seems like something that would take a script hours...
Why do I need this?
I've started creating some filters in Pixel Bender (http://en.wikipedia.org/wiki/Adobe_Pixel_Bender). Since Pixel Bender doesn't really have any random generator (and workarounds are limited) I'm planning on passing on the random numbers through random pixel values. I'm well aware that this can only be used for filters in which Pixel Bender creates images from scratch, but that's the plan.
Thanks!
Understanding the details of the Add Noise filter is probably beyond the scope of a short post. Here is an approach to start learning what it does.
- Take a 50% gray level and make it a Smart Object.
- Open up the Histogram panel (it should show a spike right at 50%)
- Apply noise filter to Smart Object in monochrome building up from small percentages in small increments
- For the uniform-distribution option, you will notice that at 50% applied you end up with a uniform probability function over the entire tonal range.
There are a variety of ways to manipulate this function, through various blends.
Please note a couple of things:
1) I am using CS5 and, though it is not documented anywhere that I have seen, the Noise Filter works differently than in CS4. In CS4, if you run the same noise filter twice on two identical objects, my experience is that you get a bit-for-bit identical result (a random pattern, yet not independent of the next run of the filter). Manipulating Probability Density Functions (PDFs) per my previous post requires that each run of the Noise Filter start with a different "seed" so that the result is independent of the previous run. CS5 does this: successive runs create independent noise results.
2) PS does not equally randomize R, G, and B. There are ways to get around this, but I wanted to give you a heads-up.
3) There are other ways to generate quick random patterns outside of PS and bring them in (using scripts). You would need to understand the format of the Photoshop Raw file. This type of file contains just the image pixel data as raw bytes. Such files are easy to create and then load into PS: from a script (or, even faster, by calling a Python script) create the file, load it into PS as a Photoshop Raw format file, and use it as an overlay. There is no question that this is faster than trying to manipulate individual PS pixels through a script.
4) Please note that under Color Settings there is an option called Dither. If this is set, there are times when PS adds noise to the image (I leave mine turned off). It is used in a number of places in PS beyond what the documentation implies (more than just when moving between 8-bit color spaces).
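Point 3 above can be sketched as follows. This is an illustrative generator, not anything from Adobe; the file name and dimensions are arbitrary. It writes headerless interleaved 8-bit pixel bytes, the kind of pixel-data-only file described above, which you open by supplying the width, height and channel count yourself in the Photoshop Raw open dialog:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Random;

// Illustrative sketch: write uniform random 8-bit RGB pixel data to a headerless
// file that can be opened via the Photoshop Raw format (you supply
// width/height/channels in the open dialog, since the file carries no header).
public class RawNoise {
    public static void main(String[] args) throws IOException {
        int width = 512, height = 512, channels = 3; // arbitrary example dimensions, 8 bits/channel
        byte[] pixels = new byte[width * height * channels];
        new Random().nextBytes(pixels); // each channel of each pixel gets an independent uniform byte
        FileOutputStream out = new FileOutputStream("noise.raw");
        try {
            out.write(pixels);
        } finally {
            out.close();
        }
    }
}
```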
Good luck if you are going after making a plug-in; I have never invested in that learning curve. -
Hi ALL,
I am very new to BI and have got an issue at hand.
We have a vendor ageing report that is distributed month-wise, i.e. 0-30, 31-60 and so on. Is there a way I can change the months to weeks and get the report?
Regards
Arvind Kumar
Hi,
How is it distributed month-wise? Assuming it is done at the transformation level, you can either enhance the rule written for this distribution, or load the data to a new InfoProvider and distribute it week-wise there.
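If the distribution is done in a transformation rule, the change largely amounts to swapping a 30-day bucket width for a 7-day one. A language-agnostic sketch of that bucketing logic (the names here are illustrative, not actual BI routine names):

```java
// Illustrative sketch of ageing-bucket logic: the monthly report maps an item's
// age in days to buckets 0-30, 31-60, ...; the weekly version maps the same age
// to 0-7, 8-14, 15-21, ... Only the bucket width changes.
public class AgeingBucket {
    static String bucket(int ageInDays, int width) {
        if (ageInDays <= width) return "0-" + width;      // first bucket starts at 0
        int i = (ageInDays - 1) / width;                  // 0-based bucket index
        return (width * i + 1) + "-" + (width * (i + 1));
    }

    public static void main(String[] args) {
        System.out.println(bucket(45, 30)); // monthly bucket: 31-60
        System.out.println(bucket(45, 7));  // weekly bucket:  43-49
    }
}
```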
Hope it helps...
Regards,
Ashish -
How do I change the width of shape lines
I created a pattern in Photoshop CS6 for a project with shape lines, which reside on shape layers. I want to change the line weight, but after I create the lines it seems there is no way of changing the shape, even after I select a line.
I can change the spread (uniform distribution) of the lines and their length, but not their weight. Whether I choose all of the lines or just one (they are all on the same shape layer), the weight does not change. This does not make sense. Can somebody please chime in?
Ed Carreon
www.carreonphotography.com
If you used the Line Tool, then you really created rectangles, and the "Weight" option does not modify existing lines. Scaling them simultaneously to become narrower or wider rectangles and then redistributing them may solve your problem (their CS6 stroke attribute may complicate matters), so try that first.
-
Hi All.
I have a stored procedure, shown below. It should loop through all of the databases on the SQL instance and pass the details on to another stored procedure. The stored procedure is called from SQL Agent,
and what I am finding is that when executed from SQL Agent it only calls two databases, the first two on the list. There are six other databases, and it ignores them. If I call the stored procedure from outside of SQL Agent, then it covers all the databases
it should cover. It's really strange and I cannot think of why we have this behaviour.
When I run the same duplicate setup on UAT, it works fine. Everything in terms of the stored procedure is exactly the same. I am only seeing this issue on production. Any ideas, please?
CREATE procedure [dbbackup].[spbackup_databases] (@BackupSysyemDB bit = 1,@BackupUserDB bit = 1, @BackupType char(1) = 'D', @BackupDB varchar(128) = NULL )
as
Begin
DECLARE @ActiveBackupID [uniqueidentifier]
DECLARE @ActiveEnabled char(1)
DECLARE @ActiveBackupDb varchar(128)
DECLARE @ActiveBackupOrder tinyint
DECLARE @ActiveBackupDirectory nvarchar(max)
DECLARE @ActiveBackupType nvarchar(15)
DECLARE @ActiveFullRetainHours int
DECLARE @ActiveDiffRetainHours int
DECLARE @ActiveLogRetainHours int
DECLARE @ActiveVerifyBackup char(1)
DECLARE @ActiveCompress char(1)
DECLARE @ActiveCopyOnly char(1)
DECLARE @BackupBlocksize int
DECLARE @ActiveChangeBackupType char(1)
DECLARE @ActiveBackupSoftwareID tinyint
DECLARE @ActiveEnableChecksum char(1)
DECLARE @ActiveBackupBlocksize int
DECLARE @ActiveBackupMaxTransferSize int
DECLARE @ActiveBackupStripes tinyint
DECLARE @ActiveCompressionLevel tinyint
DECLARE @ActiveBackupDescription nvarchar(max)
DECLARE @ActiveThreads tinyint
DECLARE @ActiveThrottle tinyint
DECLARE @ActiveEncrypt char(1)
DECLARE @ActiveEncryptionAlgorithm varchar(30)
DECLARE @ActiveServerCertificate nvarchar(max)
DECLARE @ActiveEncryptionKey nvarchar(max)
DECLARE @ActiveBackupReadWriteFileGroups char(1)
DECLARE @ActiveOverrideBackupPreference char(1)
DECLARE @ActiveLogAction char(1)
DECLARE @ActiveExecuteAction char(1)
DECLARE @ActiveBackupBufferCount int
DECLARE @ActiveBackupSystemDb varchar(128)
DECLARE @ActiveSystemBackupDirectory nvarchar(max)
DECLARE @SystemBackupType char(1)
DECLARE @ActiveSystemVerifyBackup char(1)
DECLARE @ActiveSystemFullRetainHours int
DECLARE @ActiveRetentionHours int
DECLARE @DefaultEnabled char(1)
DECLARE @DefaultBackupOrder tinyint
DECLARE @DefaultBackupDirectory nvarchar(MAX)
DECLARE @DefaultBackupType char(1)
DECLARE @DefaultFullRetainHours int
DECLARE @DefaultDiffRetainHours int
DECLARE @DefaultLogRetainHours int
DECLARE @DefaultVerifyBackup char(1)
DECLARE @DefaultCompress char(1)
DECLARE @DefaultCopyOnly char(1)
DECLARE @DefaultChangeBackupType char(1)
DECLARE @DefaultBackupSoftwareID tinyint
DECLARE @DefaultEnableChecksum char(1)
DECLARE @DefaultBackupBlocksize int
DECLARE @DefaultBackupBufferCount int
DECLARE @DefaultBackupMaxTransferSize int
DECLARE @DefaultBackupStripes tinyint
DECLARE @DefaultCompressionLevel tinyint
DECLARE @DefaultBackupDescription nvarchar(MAX)
DECLARE @DefaultThreads tinyint
DECLARE @DefaultThrottle tinyint
DECLARE @DefaultEncrypt char(1)
DECLARE @DefaultEncryptionAlgorithm varchar(30)
DECLARE @DefaultServerCertificate nvarchar(MAX)
DECLARE @DefaultEncryptionKey nvarchar(MAX)
DECLARE @DefaultBackupReadWriteFileGroups char(1)
DECLARE @DefaultOverrideBackupPreference char(1)
DECLARE @DefaultLogAction char(1)
DECLARE @DefaultExecuteAction char(1)
DECLARE @BackupDirectory nvarchar(255)
DECLARE @Error int
DECLARE @ErrorMessage nvarchar(MAX)
DECLARE @ErrorMessageOriginal nvarchar(MAX)
DECLARE @DefaultBackupKeysFolder nvarchar(MAX)
DECLARE @DefaultBackupCertificatesFolder nvarchar(MAX)
DECLARE @DefaultBackupSystemDbsFolder nvarchar(MAX)
DECLARE @DefaultSnapshotLocation nvarchar(MAX)
-- Initiate Backup Configuration Default Values
EXEC dbbackup.spinitiatebackup
-- Load defaults and insert into table if not present.
-- The default is full database backups as standard
SELECT
@DefaultEnabled = [Enabled]
,@DefaultBackupOrder = [BackupOrder]
,@DefaultBackupDirectory = [BackupDirectory]
,@DefaultBackupType = [BackupType]
,@DefaultFullRetainHours = [FullRetainHours]
,@DefaultDiffRetainHours = [DiffRetainHours]
,@DefaultLogRetainHours = [LogRetainHours]
,@DefaultVerifyBackup = [VerifyBackup]
,@DefaultSnapshotLocation = [DBCCFolder]
,@DefaultCompress = [Compress]
,@DefaultCopyOnly = [CopyOnly]
,@DefaultChangeBackupType = [ChangeBackupType]
,@DefaultBackupSoftwareID = [BackupSoftwareID]
,@DefaultEnableChecksum = [EnableChecksum]
,@DefaultBackupBlocksize = [BackupBlocksize]
,@DefaultBackupBufferCount = [BackupBufferCount]
,@DefaultBackupMaxTransferSize = [BackupMaxTransferSize]
,@DefaultBackupStripes = [BackupStripes]
,@DefaultCompressionLevel = [CompressionLevel]
,@DefaultBackupDescription = [BackupDescription]
,@DefaultThreads = [Threads]
,@DefaultThrottle = [Throttle]
,@DefaultEncrypt = [Encrypt]
,@DefaultEncryptionAlgorithm = [EncryptionAlgorithm]
,@DefaultServerCertificate = [ServerCertificate]
,@DefaultEncryptionKey = [EncryptionKey]
,@DefaultBackupReadWriteFileGroups = [BackupReadWriteFileGroups]
,@DefaultOverrideBackupPreference = [OverrideBackupPreference]
,@DefaultLogAction = [LogAction]
,@DefaultExecuteAction = [ExecuteAction]
FROM [dbbackup].[backupconfig]
INSERT INTO [dbbackup].[db_backupconfig]
(
[BackupID]
,[BackupDb]
,[Enabled]
,[BackupOrder]
,[BackupDirectory]
,[BackupType]
,[FullRetainHours]
,[DiffRetainHours]
,[LogRetainHours]
,[VerifyBackup]
,[DBCCSnapFolder]
,[Compress]
,[CopyOnly]
,[ChangeBackupType]
,[BackupSoftwareID]
,[EnableChecksum]
,[BackupBlocksize]
,[BackupBufferCount]
,[BackupMaxTransferSize]
,[BackupStripes]
,[CompressionLevel]
,[BackupDescription]
,[Threads]
,[Throttle]
,[Encrypt]
,[EncryptionAlgorithm]
,[ServerCertificate]
,[EncryptionKey]
,[BackupReadWriteFileGroups]
,[OverrideBackupPreference]
,[LogAction]
,[ExecuteAction]
)
SELECT
newid()
,s.name
,@DefaultEnabled
,@DefaultBackupOrder
,@DefaultBackupDirectory
,@BackupType
,@DefaultFullRetainHours
,@DefaultDiffRetainHours
,@DefaultLogRetainHours
,@DefaultVerifyBackup
,@DefaultSnapshotLocation
,@DefaultCompress
,@DefaultCopyOnly
,@DefaultChangeBackupType
,@DefaultBackupSoftwareID
,@DefaultEnableChecksum
,@DefaultBackupBlocksize
,@DefaultBackupBufferCount
,@DefaultBackupMaxTransferSize
,@DefaultBackupStripes
,@DefaultCompressionLevel
,@DefaultBackupDescription
,@DefaultThreads
,@DefaultThrottle
,@DefaultEncrypt
,@DefaultEncryptionAlgorithm
,@DefaultServerCertificate
,@DefaultEncryptionKey
,@DefaultBackupReadWriteFileGroups
,@DefaultOverrideBackupPreference
,@DefaultLogAction
,@DefaultExecuteAction
FROM master.dbo.sysdatabases s
WHERE s.name not in (select BackupDb from dbbackup.[db_backupconfig] where BackupType = @BackupType)
AND DATABASEPROPERTY([s].name, 'IsSuspect') = 0
AND DATABASEPROPERTY([s].name, 'IsShutdown') = 0
AND DATABASEPROPERTY([s].name, 'IsOffline') = 0
AND DATABASEPROPERTY([s].name, 'IsInLoad') = 0
AND DATABASEPROPERTY([s].name, 'IsInRecovery') = 0
AND isnull(DATABASEPROPERTY([s].name, 'IsNotRecovered'),0) = 0 -- 2012 bug
AND DATABASEPROPERTY([s].name, 'IsReadOnly') = 0
AND [s].name NOT IN ('master', 'msdb', 'model', 'distribution', 'tempdb')
-- LOAD CURSOR
DECLARE db_cursor CURSOR FOR
SELECT
s.[name]
,[Enabled]
,[BackupOrder]
,[BackupDirectory]
,bt.[BackupName]
,[FullRetainHours]
,[DiffRetainHours]
,[LogRetainHours]
,[VerifyBackup]
,[Compress]
,[CopyOnly]
,[ChangeBackupType]
,[BackupSoftwareID]
,[EnableChecksum]
,[BackupBlocksize]
,[BackupBufferCount]
,[BackupMaxTransferSize]
,[BackupStripes]
,[CompressionLevel]
,[BackupDescription]
,[Threads]
,[Throttle]
,[Encrypt]
,[EncryptionAlgorithm]
,[ServerCertificate]
,[EncryptionKey]
,[BackupReadWriteFileGroups]
,[OverrideBackupPreference]
,[LogAction]
,[ExecuteAction]
FROM [dbbackup].[db_backupconfig] bc
INNER JOIN [dbbackup].backuptype bt ON bt.BackupType = bc.BackupType
LEFT JOIN sys.databases [s]
ON [s].[name] collate database_default = [bc].[BackupDb]
WHERE [Enabled] = 'Y'
AND bc.[BackupType] = @BackupType
AND (@BackupDB is NULL OR bc.[BackupDb] = @BackupDB)
AND @BackupUserDB = 1
ORDER BY [BackupOrder] ASC
OPEN db_cursor
FETCH NEXT FROM db_cursor
INTO
@ActiveBackupDb,
@ActiveEnabled,
@ActiveBackupOrder,
@ActiveBackupDirectory,
@ActiveBackupType,
@ActiveFullRetainHours,
@ActiveDiffRetainHours,
@ActiveLogRetainHours,
@ActiveVerifyBackup,
@ActiveCompress,
@ActiveCopyOnly,
@ActiveChangeBackupType,
@ActiveBackupSoftwareID,
@ActiveEnableChecksum,
@ActiveBackupBlocksize,
@ActiveBackupBufferCount,
@ActiveBackupMaxTransferSize,
@ActiveBackupStripes,
@ActiveCompressionLevel,
@ActiveBackupDescription,
@ActiveThreads,
@ActiveThrottle,
@ActiveEncrypt,
@ActiveEncryptionAlgorithm,
@ActiveServerCertificate,
@ActiveEncryptionKey,
@ActiveBackupReadWriteFileGroups,
@ActiveOverrideBackupPreference,
@ActiveLogAction,
@ActiveExecuteAction
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @ActiveRetentionHours = CASE
WHEN @ActiveBackupType = 'DIFF' THEN @ActiveDiffRetainHours
WHEN @ActiveBackupType = 'LOG' THEN @ActiveLogRetainHours
WHEN @ActiveBackupType = 'FULL' THEN @ActiveFullRetainHours
ELSE
144
END
IF (@ActiveBackupSoftwareID = 1) SET @ActiveBackupSoftwareID = NULL
SET @ActiveBackupDescription = 'Backup for ' + @ActiveBackupDb + ' Type = ' + @BackupType + ' ON ' + CONVERT(VARCHAR(30),GETDATE(),113)
select @ActiveBackupDb,
@ActiveEnabled,
@ActiveBackupOrder,
@ActiveBackupDirectory,
@ActiveBackupType,
@ActiveFullRetainHours,
@ActiveDiffRetainHours,
@ActiveLogRetainHours,
@ActiveVerifyBackup,
@ActiveCompress,
@ActiveCopyOnly,
@ActiveChangeBackupType,
@ActiveBackupSoftwareID,
@ActiveEnableChecksum,
@ActiveBackupBlocksize,
@ActiveBackupBufferCount,
@ActiveBackupMaxTransferSize,
@ActiveBackupStripes,
@ActiveCompressionLevel,
@ActiveBackupDescription,
@ActiveThreads,
@ActiveThrottle,
@ActiveEncrypt,
@ActiveEncryptionAlgorithm,
@ActiveServerCertificate,
@ActiveEncryptionKey,
@ActiveBackupReadWriteFileGroups,
@ActiveOverrideBackupPreference,
@ActiveLogAction,
@ActiveExecuteAction
EXEC [dbbackup].[spbackup_db]
@Databases = @ActiveBackupDb,
@Directory = @ActiveBackupDirectory,
@BackupType = @ActiveBackupType,
@Verify = @ActiveVerifyBackup,
@CleanupTime = @ActiveRetentionHours,
@Compress = @ActiveCompress,
@CopyOnly = @ActiveCopyOnly,
@ChangeBackupType = @ActiveChangeBackupType,
@BackupSoftware = @ActiveBackupSoftwareID,
@CheckSum = @ActiveEnableChecksum,
@BlockSize = @ActiveBackupBlocksize,
@BufferCount = @ActiveBackupBufferCount,
@MaxTransferSize =@ActiveBackupMaxTransferSize,
@NumberOfFiles = @ActiveBackupStripes,
@CompressionLevel = @ActiveCompressionLevel,
@Description = @ActiveBackupDescription,
@Threads = @ActiveThreads,
@Throttle = @ActiveThrottle,
@Encrypt = @ActiveEncrypt,
@EncryptionAlgorithm = @ActiveEncryptionAlgorithm,
@ServerCertificate = @ActiveServerCertificate,
@ServerAsymmetricKey = @ActiveEncryptionKey,
@EncryptionKey = @ActiveEncryptionKey,
@ReadWriteFileGroups = @ActiveBackupReadWriteFileGroups,
@OverrideBackupPreference = @ActiveOverrideBackupPreference,
@LogToTable = @ActiveLogAction,
@Execute = @ActiveExecuteAction
FETCH NEXT FROM db_cursor
INTO
@ActiveBackupDb,
@ActiveEnabled,
@ActiveBackupOrder,
@ActiveBackupDirectory,
@ActiveBackupType,
@ActiveFullRetainHours,
@ActiveDiffRetainHours,
@ActiveLogRetainHours,
@ActiveVerifyBackup,
@ActiveCompress,
@ActiveCopyOnly,
@ActiveChangeBackupType,
@ActiveBackupSoftwareID,
@ActiveEnableChecksum,
@ActiveBackupBlocksize,
@ActiveBackupBufferCount,
@ActiveBackupMaxTransferSize,
@ActiveBackupStripes,
@ActiveCompressionLevel,
@ActiveBackupDescription,
@ActiveThreads,
@ActiveThrottle,
@ActiveEncrypt,
@ActiveEncryptionAlgorithm,
@ActiveServerCertificate,
@ActiveEncryptionKey,
@ActiveBackupReadWriteFileGroups,
@ActiveOverrideBackupPreference,
@ActiveLogAction,
@ActiveExecuteAction
END
CLOSE db_cursor
DEALLOCATE db_cursor
-- Deal with system databases
-- Read parameters
IF (@BackupSysyemDB = 1)
BEGIN
SELECT @ActiveBackupSystemDb = sysbk.[BackupDb],
@ActiveSystemBackupDirectory = sysbk.[BackupDirectory],
@ActiveSystemVerifyBackup = sysbk.VerifyBackup,
@ActiveSystemFullRetainHours = sysbk.[FullRetainHours]
FROM [dbbackup].[system_backupconfig] sysbk
WHERE [BackupDb] = 'SYSTEM_DATABASES'
EXEC [dbbackup].[spbackup_db]
@Databases = 'SYSTEM_DATABASES',
@Directory = @ActiveSystemBackupDirectory,
@Verify = @ActiveSystemVerifyBackup,
@CleanupTime = @ActiveSystemFullRetainHours,
@BackupType = 'FULL'
END
END
GO

Does your SQL Agent account have enough permissions to do this? Is the SQL Agent account given sysadmin permissions?
Which account do you use in SSMS to run this stored procedure?
Regards, Ashwin Menon My Blog - http:\\sqllearnings.com
It has the permissions, as it is sysadmin; if it didn't, the 2 data sets would not be returned. For some reason it is only looping twice. Could this be something to do with the cursor type? -
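One way to narrow this down is to run the cursor's SELECT as a plain COUNT with the same parameter values the procedure receives; if it also returns 2, the loop count simply matches the data and the cursor type is not at fault. A sketch, reusing the WHERE clause from the script above (substitute literal values for the three parameters):

```sql
-- Diagnostic sketch: count the rows the cursor query admits.
-- Replace @BackupType, @BackupDB, and @BackupUserDB with the actual
-- values passed to the procedure before running this standalone.
SELECT COUNT(*) AS CursorRowCount
FROM [dbbackup].[db_backupconfig] bc
INNER JOIN [dbbackup].backuptype bt ON bt.BackupType = bc.BackupType
LEFT JOIN sys.databases s ON s.[name] COLLATE database_default = bc.[BackupDb]
WHERE bc.[Enabled] = 'Y'
  AND bc.[BackupType] = @BackupType
  AND (@BackupDB IS NULL OR bc.[BackupDb] = @BackupDB)
  AND @BackupUserDB = 1
```

If the count is larger than the number of loop iterations, declaring the cursor as `LOCAL STATIC` (which snapshots the result set at OPEN time) would rule out mid-loop changes to the underlying tables affecting the fetch.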
PVM OL6 kickstart install challenges on Oracle VM 3.1.1.544
Thank you all in advance for your support!
Environment:
Oracle VM 3.1.1.544 patched
4 Dell M610s: 1 with Oracle VM Manager (4 NICs, 2 active, eth3/private and eth0/public), 3 Oracle VM servers (4 NICs, 2 bonds with 802.1q VLANs: 3 private non-routable management VLANs (management, heartbeat, live migration) on bond0 and 4 public routable VLANs for VMs on bond1).
The dom0s are on private non routable networks accessible via an interface on the private management network channel on the oracle vm manager host.
The dom0s have no access to public networks.
We have multiple web servers with kickstart files and installation trees on multiple public networks to support OL 5.8 and 6.2 kickstart installs.
Requirement:
Use existing Linux kickstart process (customer has been running RHEL for years), i.e. use existing web servers with kickstart files (with slight modifications for ovm) and install trees for Oracle Linux installs on Oracle VM. Customer is new to OVM.
Challenge:
We create an OL 6.2 VM in Oracle VM Manager and have tried the following settings in the "Boot Order → Network Boot Path":
Please note that the 10.2.x.x addresses are on a public network and the 192.168.x.x address is on a private network. The public networks are accessible by domUs and web servers with ks/install trees, not by dom0s.
Dom0:
/usr/bin/xenpvboot http://192.168.x.x/ks/oel62/ kernel=isolinux ramdisk=isolinux --args="http://192.168.x.x/ks/ol62.ks ip 10.2.x.x netmask 255.255.0.0 gateway 10.2.x.x nameserver 10.2.x.x,10.3.x.x hostname ol62test.ovm.com"
The above command generates a clean response, i.e. no errors.
DomU - Boot Order → Network Boot Path:
--args ks=http://192.168.x.x/ks/ol62.ks http://192.168.x.x/ks/oel62
The result from the above example is that the VM boots and then stops on the network manager screen. We view other ttys and see that eth0 is unable to start.
DomU - Boot Order → Network Boot Path:
--args= http://10.2.x.x/stage/ol62test.ks http://192.168.x.x/ks/oel62/
DomU - Boot Order → Network Boot Path:
http://192.168.x.x/ks/oel62/ --args="http://10.2.x.x/stage/ol62test.ks ip 10.2.x.x netmask 255.255.0.0 gateway 10.2x.x.x nameserver 10.2.x.x,10.4.x.x hostname ol62test.ovm.com"
The result from the above two examples is that the VM boots but the ks file is not invoked. The httpd access logs show the boot process and do not show the ks file being accessed.
Questions:
1. Could you please help confirm the PVM kickstart workflow between the dom0 (private) and domU (public).
1. For example, could you please help confirm the following?:
2. step 1 – start a VM with Boot Order → Network Boot Path: --args ks=http://192.168.x.x/ks/ol62.ks http://192.168.x.x/ks/oel62
3. dom0 accesses the install tree via the server management network channel, i.e. http://192.168.x.x/ks/oel62/ and boots the VM
4. domU gets an IP from either a) –args in “ Boot Order → Network Boot Path” or b) from a kickstart file
5. all good and hit the pub :-)
Thank you again for your support!

2. I can confirm this works with PVM guests. I don't run OL 6, so I can't confirm 6; OL5 is rock solid when it comes to Oracle products.
--args ks=http://ipaddress/test/ks.cfg http://ipaddressofhttpserver/distrubutioniso/
4. I always set a static DHCP mapping for the MAC for the VM guest. I never try passing the ip on the boot line. If I were to do it... I would try adding ksdevice=eth0 and then ip params.
The /distrubutioniso/ only gives you the boot files to boot the VM guest. The ks= value serves the kickstart file/script which then details the mount point for the distribution to load.
You should list /distrubutioniso/ last, not before the --args. At least that is the way I've always done it. -
Looking up a distributed queue with two persistent stores using two JMS Svr
I am trying to do the following:
1. Set up a distributed queue; I have two servers which are part of the cluster:
Server A: t3://localhost:7003
Server B: t3://localhost:7005
I go in and create two JMS servers:
JMS Server JA: targeted on Server A
JMS Server JB: targeted on Server B
Now, as the JMS servers each need to use a separate persistent store, I create two persistent stores.
2. Now, from my MDB, which is deployed on another server, I look up the queue using
t3://localhost:7003,localhost:7005 as the provider URL.
My problem is that I always end up listening to the messages on the first JMS server and never get to read the messages on JMS server JB. I guess I am able to connect to JMS Server JA, so I never try to connect to JB. What can I do about this?
Edited by: user4828945 on Mar 23, 2009 2:32 PM

Allocation of consumers wouldn't take into account the number of messages on the queue; they'd be allocated randomly. The scenario you're proposing shouldn't happen, though: WebLogic Server takes into account whether a member has consumers when sending to a distributed destination, but otherwise, assuming that Queue 1 and Queue 2 both have consumers, the distribution of load will be equal. It's not the number of consumers that determines how many messages get sent to a distributed destination member; it's whether it has consumers at all.
Assuming that did occur initially though, you'd expect processing to be a little bit more intensive on the server with the queue holding 30 messages. It would pretty quickly even up though.
From that point forward, it would be somewhere between difficult and impossible to get to the second scenario, where you have an unequal number of messages in each distributed destination member, unless the work being sent with each message to an MDB can vary significantly in how long it would take to process.
Assuming (and it's a big if) you could get to that scenario, then the MDBs wouldn't switch over - they stay connected to a particular distributed destination member. And it's their connection to a member as a consumer that controls how WebLogic Server load balances messages (assuming default configuration) so that's part of what makes it unlikely to get there.
From going back to first principles in the documentation, it seems like your best result would actually be from deploying the MDB to the cluster - that way, there's no remote connections to JMS queues, and you get a pool of MDBs on each server instance.
Ref here: http://e-docs.bea.com/wls/docs81/ejb/message_beans.html -
ore.doEval function returning an error
Hi,
I have just started learning Oracle R Enterprise.
I have installed Database 11.2.0.3 and R in an Oracle Linux guest machine, and I am connecting from a Windows 8 host machine. I was able to successfully configure the client and server and can connect to the database.
But when I use the "ore.doEval" function, I get the following error. Could anyone please suggest a solution?
> library(ORE)
> ore.connect(user="rquser", sid="orcl",host="linux02.linux", password="admin123",port=1521,all=TRUE)
> ore.is.connected()
[1] TRUE
> ore.doEval(function(){123})
Error in .oci.GetQuery(conn, statement, data = data, prefetch = prefetch, :
Error in try({ : ORA-20000: RQuery error
Error in dyn.load(file, DLLpath = DLLpath, ...) :
unable to load shared object '/u01/app/oracle/product/11.2.0.3/R/library/OREserver/libs/OREserver.so':
libgfortran.so.1: cannot open shared object file: No such file or directory
ORA-06512: at "RQSYS.RQEVALIMPL", line 104
ORA-06512: at "RQSYS.RQEVALIMPL", line 101
Regards,
Ramu.

Got the solution below from Metalink and it worked.
Oracle Database - Enterprise Edition - Version 11.2.0.1.0 to 11.2.0.4 [Release 11.2]
Information in this document applies to any platform.
Symptoms
The following error occurs:
[oracle@ore ~]$ R
Oracle Distribution of R version 3.0.1 (--) -- "Good Sport"
Copyright (C) The R Foundation for Statistical Computing
Platform: x86_64-unknown-linux-gnu (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
Natural language support but running in an English locale
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
You are using Oracle's distribution of R. Please contact
Oracle Support for any problems you encounter with this
distribution.
Loading required package: OREbase
Loading required package: utils
Attaching package: ‘OREbase’
The following objects are masked from ‘package:base’:
cbind, data.frame, eval, interaction, order, paste, pmax, pmin,
rbind, table
Loading required package: OREembed
Loading required package: OREstats
Loading required package: stats
Loading required package: MASS
Loading required package: grDevices
Loading required package: graphics
Loading required package: OREgraphics
Loading required package: OREeda
Loading required package: OREmodels
Loading required package: OREdm
Loading required package: lattice
Loading required package: OREpredict
Loading required package: ORExml
Loading required package: ROracle
Loading required package: DBI
Connected on Database 12c with ORE 1.4
> ore.ls()
[1] "IRIS" "ONTIME"
> ore.doEval(function() { 123 })
Error in .oci.GetQuery(conn, statement, data = data, prefetch = prefetch, :
Error in try({ : ORA-20000: RQuery error
Error in dyn.load(file, DLLpath = DLLpath, ...) :
unable to load shared object '/u01/app/oracle/product/12.1.0.1/dbhome_1/R/library/OREserver/libs/OREserver.so':
libgfortran.so.1: cannot open shared object file: No such file or directory
ORA-06512: at "RQSYS.RQEVALIMPL", line 104
ORA-06512: at "RQSYS.RQEVALIMPL", line 101
Cause
This is caused by the required library 'libgfortran.so.1' being missing.
Solution
Install the "compat-libgfortran-41-4.1.2-39.el6.x86_64.rpm" package; after installing it, the same call succeeds:
[oracle@ore ~]$ R
(same R startup banner and package loading output as in the Symptoms section above)
Connected on Database 12c with ORE 1.4
> ore.ls()
[1] "IRIS" "ONTIME"
> ore.doEval(function() { 123 })
[1] 123 -
What is skewed data in a TABLE COLUMN in Oracle?
Hi All,
I am going through numerous articles to understand cardinality, selectivity, histograms, and skewed data, so as to understand the characteristics of the Oracle optimizer. I have a hard time understanding the concept of data being skewed for a column in a table in Oracle. I need your help in understanding it.
Thanks,
Rav.

>
Without understanding the proper meaning of skewness, I assumed that a column's data is skewed if the NDV of the column is high.
Your previous statement raised this question:
Is there a correlation between the number of distinct values in the column and the skewness of a column?
>
See this Oracle white paper on Understanding Optimizer Statistics
http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/twp-optimizer-stats-concepts-110711-1354477.pdf
The section on Table and Column Statistics on page 2 has information that will clear this up
>
For example, if a table has 100 records, and the table access evaluates an equality predicate on a column that has 10 distinct values, then the Optimizer, assuming uniform data distribution, estimates the cardinality to be the number of rows in the table divided by the number of distinct values for the column or 100/10 = 10.
Histograms tell the Optimizer about the distribution of data within a column. By default (without a histogram), the Optimizer assumes a uniform distribution of rows across the distinct values in a column. As described above, the Optimizer calculates the cardinality for an equality predicate by dividing the total number of rows in the table by the number of distinct values in the column used in the equality predicate. If the data distribution in that column is not uniform (i.e., a data skew) then the cardinality estimate will be incorrect. In order to accurately reflect a non-uniform data distribution, a histogram is required on the column. The presence of a histogram changes the formula used by the Optimizer to estimate the cardinality, and allows it to generate a more accurate execution plan.
>
Did you notice the phrase 'if the data distribution in that column is not uniform (i.e. a data skew)'?
The NDV doesn't matter. It is the relative magnitude of each distinct value that determines the skew.
SB's example of gender, male or female, is not skewed if the number of 'males' is approximately equal to the number of 'females'.
But if the number of one of them, 'male' for example, is substantially larger than the number of the other then the data is skewed.
Assume a table has 1 million rows. If only one row is 'male' it makes sense to use an index to find it. But it wouldn't make sense to use that index if you were looking for 'female' since all rows but one are 'female'.
But because the NDV is two and there are 1 million rows, without a histogram Oracle would assume a uniform distribution, and the query WHERE GENDER = 'male' would use a full table scan instead of the index.
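To make the arithmetic concrete: without a histogram, the estimate for any equality predicate on GENDER is rows/NDV = 1,000,000 / 2 = 500,000, which is off by several orders of magnitude for 'male'. Gathering a histogram on the column corrects this; a sketch using DBMS_STATS (the table name here is illustrative):

```sql
-- Hypothetical table PEOPLE: 1,000,000 rows, only one of which is 'male'.
-- Gather a histogram on GENDER so the optimizer no longer assumes a
-- uniform distribution (rows/NDV = 1000000/2 = 500000):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'PEOPLE',
    method_opt => 'FOR COLUMNS GENDER SIZE 254');
END;
/
-- With the histogram in place, WHERE GENDER = 'male' is estimated at
-- about 1 row (index access makes sense), while WHERE GENDER = 'female'
-- is estimated at about 999,999 rows (a full table scan is cheaper).
```

The frequency histogram stores the actual count per distinct value, so the skewed value gets the near-correct cardinality instead of the uniform rows/NDV guess.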