OLTP environment testing
Hi,
I'm using 11g R2.
I'm trying to show, by simulation, why Flash SSD may be poor for use as tablespace storage in an OLTP system.
Could anyone suggest the best way of setting up / simulating an OLTP system in Oracle, i.e. which tables to create and a script to run against them?
Any help would be appreciated.
I would welcome your opinion...
That is the point of my research: to determine whether MLC SSD is appropriate for use in OLTP systems.
Many academic papers state that because MLC flash SSD can't do in-place updates, its use in OLTP systems is inhibited (poor performance with non-sequential writes and updates), and that is also what is said at Burleson Consulting...
"DRAM SSD vs. Flash SSD
With all the talk about the Oracle “flash cache”, it is important to note that there are two types of SSD, and only DRAM SSD is suitable for Oracle database storage. The flash type SSD suffers from serious shortcomings, namely a degradation of access speed over time. At first, Flash SSD is 5 times faster than a platter disk, but after some usage the average read time becomes far slower than a hard drive. For Oracle, only rack-mounted DRAM SSD is acceptable for good performance:"
http://www.dba-oracle.com/art_dbazine_ault_ssd.htm
I have previously run tests on column-based OLAP systems, where Flash SSD proved a great advantage (but bearing in mind these are write-once, read-many workloads).
My experimentation is to see whether this is true or not. I'm using a $200 MLC Flash SSD (64 GB)... the aim of my experimentation is to determine its applicability in an OLTP environment.
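Since the question asks for tables and a script: a minimal sketch of the workload contrast being tested (not an Oracle benchmark - a plain file stands in for the tablespace, and the 8 KB block size is a hypothetical choice). It times sequential writes, which OLAP-style loads tend to issue, against writes at shuffled offsets, which approximate OLTP's scattered in-place updates:

```python
import os
import random
import time

def timed_writes(path, block_size=8192, num_blocks=256, sequential=True):
    """Write num_blocks blocks of block_size bytes, either in order or at
    shuffled offsets, fsyncing each write so the device (not just the OS
    page cache) absorbs the pattern. Returns elapsed seconds."""
    data = os.urandom(block_size)
    offsets = list(range(num_blocks))
    if not sequential:
        random.shuffle(offsets)
    # Pre-size the file so random writes land inside an existing extent
    with open(path, "wb") as f:
        f.truncate(block_size * num_blocks)
    start = time.perf_counter()
    with open(path, "r+b") as f:
        for i in offsets:
            f.seek(i * block_size)
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the write through to the device
    return time.perf_counter() - start
```

Running both variants against a file placed on the SSD under test, repeatedly so the drive fills up, would give a first-order view of whether the random pattern degrades the way the quoted article claims.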
Similar Messages
-
Local Module (Error-Handler) Failed the verify environment test
When attempting to start the Messaging Server, the following error
message occurs and the server fails to start:
<P>
19980226114147:Dispatch:Notification:Local Module (Error-Handler) Failed
the verify environment test.
<BR>Module not Loaded.
<BR>Startup Problem:
<BR>Module Error-Handler is required for proper operation.
<BR>Netscape Messaging Server Exiting!
<P>
This problem can be caused by a corruption to the configuration file
for the admin server that controls the Messaging server
(install_dir/admin-serv/ns-admin.conf).
Specifically, if the
<B>Port</B> setting is missing, the above-mentioned error will occur.
<P>
Add a valid <B>Port</B> setting to the
ns-admin.conf file and
rerun the /etc/NscpMail start command.
Does anyone have any idea about this...
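As a quick sanity check before restarting, the missing-Port condition can be detected programmatically. This helper is hypothetical and assumes the simple `directive value` line format that ns-admin.conf uses:

```python
def has_port_setting(conf_text):
    """Return True if the config text contains a Port directive with a
    numeric value - the missing piece that triggers the error above."""
    for line in conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0].lower() == "port" and parts[1].isdigit():
            return True
    return False
```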
-
SharePoint 2010 backup and restore to test SharePoint environment - testing Disaster recovery
We have a production SharePoint 2010 environment with one Web/App server and one SQL server.
We have a test SharePoint 2010 environment with one server (Sharepoint server and SQL server) and one AD (domain is different from prod environment).
Servers are Windows 2008 R2 and SQL 2008 R2.
Versions are the same on prod and test servers.
We need to setup a test environment with the exact setup as production - we want to try Disaster recovery.
We have performed a backup of the farm from PROD and we wanted to restore it on our new server in the test environment. The backup completed successfully with no errors.
We tried to restore the whole farm from that backup on the test environment using Central Administration, but we got a message - restore failed with errors.
We chose the NEW CONFIGURATION option during restore, and we set new database names...
Some of the errors are:
FatalError: Object User Profile Service Application failed in event OnPreRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: The specified user or domain group was not found.
Warning: Cannot restore object User Profile Service Application because it failed on backup.
FatalError: Object User Profile Service Application failed in event OnPreRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
Verbose: Starting object: WSS_Content_IT_Portal.
Warning: [WSS_Content_IT_Portal] A content database with the same ID already exists on the farm. The site collections may not be accessible.
FatalError: Object WSS_Content_IT_Portal failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: The specified component exists. You must specify a name that does not exist.
Warning: [WSS_Content_Portal] The operation did not proceed far enough to allow RESTART. Reissue the statement without the RESTART qualifier.
RESTORE DATABASE is terminating abnormally.
FatalError: Object Portal - 80 failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
ArgumentException: The IIS Web Site you have selected is in use by SharePoint. You must select another port or hostname.
FatalError: Object Access Services failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: Object parent could not be found. The restore operation cannot continue.
FatalError: Object Secure Store Service failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: The specified component exists. You must specify a name that does not exist.
FatalError: Object PerformancePoint Service Application failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: Object parent could not be found. The restore operation cannot continue.
FatalError: Object Search_Service_Application_DB_88e1980b96084de984de48fad8fa12c5 failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
Aborted due to error in another component.
Could you please help us to resolve these issues?
I'd totally agree with this. Full-fledged functionality isn't the aim of DR; the aim is getting the functional parts of your platform back up before too much time and money is lost.
Anything I can add would be a repeat of what Jesper has wisely said but I would very much encourage you to look at these two resources: -
DR & back-up book by John Ferringer for SharePoint 2010
John's back-up PowerShell Script in the TechNet Gallery
Steven Andrews
SharePoint Business Analyst: LiveNation Entertainment
Blog: baron72.wordpress.com
Twitter: Follow @backpackerd00d
My Wiki Articles:
CodePlex Corner Series
Please remember to mark your question as "answered" if this solves (or helps) your problem. -
Backup method in OLTP environment
Which backup strategy would be best for 24/7 OLTP 9i/10g databases with no standby servers on clustered UNIX machines? Suggestions/comments?
Hi..
RMAN backup is the easiest and best method of taking the backup. RMAN in 10g has some additional features compared to 9i - for example, you can make the backup pieces compressed, and you can catalog backup pieces in 10g.
for reference
10g RMAN -- [http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/part1.htm#i996723]
9i rman -- [http://download.oracle.com/docs/cd/B10501_01/server.920/a96566/preface.htm#972108]
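A sketch of what the compressed variant might look like - a hypothetical wrapper, not a tested production script. AS COMPRESSED BACKUPSET requires 10g or later, and the database must be open in ARCHIVELOG mode for a hot backup:

```python
def rman_backup_script(compressed=True):
    """Compose RMAN commands for a hot backup of an open database running
    in ARCHIVELOG mode. Compressed backupsets require 10g or later."""
    backup = ("BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;"
              if compressed else
              "BACKUP DATABASE PLUS ARCHIVELOG;")
    return "\n".join([
        "RUN {",
        "  " + backup,
        "  DELETE NOPROMPT OBSOLETE;",  # prune per the configured retention policy
        "}",
    ])
```

The resulting text could be fed to `rman target /` from a scheduler on the cluster nodes; the exact invocation depends on your catalog and channel configuration.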
HTH
Anand -
Create Test Environment - Best Practice
HI all..
Please, does someone know the best practice for creating a test environment from our company DB for testing purposes?
OUR SCENARIO:
SAP B1 9.1 PL 6
MSSQL 2008 R2
Integration service activated
WINDOWS 2008 R2 Ent Edition for client machine and server machine
Thanks in advance
--LUCA
Hi Manish,
I would like to have a copy of our company DB for testing purpose..
It's not important integration service.
We would like to test the 9.1 PL 06 BoM new features without modifying the production environment...
Thanks
--LUCA -
Is my test suite configured properly?
I am building db-4.6.19 from source on Novell SuSE Linux 9.2 on ia64 (Intel Itanium2) hardware, then running the test suite with tclsh 8.4. Here’s my configuration line:
cd build_unix
../dist/configure prefix=/home/mark/WORKAREAS/webplatform/build/dev/20070924 with-test --with-tcl=/usr/lib
I run make and that goes well, then I run the standard test in tclsh via:
echo 'info tclversion; source ../test/test.tcl; run_std; exit' | tclsh; tail ALL.OUT
Most of the tests pass, but the tail of ALL.OUT (db-4.6.19/build_unix/ALL.OUT, below) shows many error messages for a few tests. They seem to be variations on the same issue, and I'm not sure how to fix it at this point - can someone point me to help? It appears that a command isn't parsing properly.
I'm not sure how to troubleshoot further; it seems to be a syntax error.
Thanks!
eval test119 -hash "119" ; verify_dir ./TESTDIR "" 1
FAIL:14:29:10 (00:00:00) run_method: -hash test119: bad command "handles": must be dbremove, dbrename, env, envremove, open, sequence, version, rand, random_int, srand, or debug_check
eval test120 -hash "120" ; verify_dir ./TESTDIR "" 1
FAIL:14:29:10 (00:00:00) run_method: -hash test120: bad command "handles": must be dbremove, dbrename, env, envremove, open, sequence, version, rand, random_int, srand, or debug_check
eval test121 -hash "121" ; verify_dir ./TESTDIR "" 1
FAIL:14:29:10 (00:00:00) run_method: -hash test121: bad command "handles": must be dbremove, dbrename, env, envremove, open, sequence, version, rand, random_int, srand, or debug_check
eval test122 -hash "122" ; verify_dir ./TESTDIR "" 1
FAIL:14:29:10 (00:00:00) run_method: -hash test122: bad command "handles": must be dbremove, dbrename, env, envremove, open, sequence, version, rand, random_int, srand, or debug_check
Regression Tests Failed
Test suite run completed at: 14:29 09/24/07
Thanks, I have built in a new directory and amended my configuration line to:
../dist/configure prefix=/home/mark/WORKAREAS/webplatform/build/dev/20070926 with-test enable-tcl with-tcl=/usr/lib
Results in:
checking build system type... ia64-unknown-linux-gnu
checking host system type... ia64-unknown-linux-gnu
checking if building in the top-level or dist directories... no
checking if --disable-cryptography option specified... no
checking if --disable-hash option specified... no
checking if --disable-mutexsupport option specified... no
checking if --disable-queue option specified... no
checking if --disable-replication option specified... no
checking if --disable-statistics option specified... no
checking if --disable-verify option specified... no
checking if --enable-compat185 option specified... no
checking if --enable-cxx option specified... no
checking if --enable-debug option specified... no
checking if --enable-debug_rop option specified... no
checking if --enable-debug_wop option specified... no
checking if --enable-diagnostic option specified... no
checking if --enable-dump185 option specified... no
checking if --enable-java option specified... no
checking if --enable-mingw option specified... no
checking if --enable-fine_grained_lock_manager option specified... no
checking if --enable-o_direct option specified... no
checking if --enable-posixmutexes option specified... no
checking if --enable-pthread_api option specified... no
checking if --enable-rpc option specified... no
checking if --enable-smallbuild option specified... no
checking if --enable-tcl option specified... yes
checking if --enable-test option specified... no
checking if --enable-uimutexes option specified... no
checking if --enable-umrw option specified... no
checking if --with-mutex=MUTEX option specified... no
checking if --with-tcl=DIR option specified... /usr/lib
checking if --with-uniquename=NAME option specified... no
checking for chmod... chmod
checking for cp... cp
checking for ln... ln
checking for mkdir... mkdir
checking for rm... rm
checking for sh... /bin/sh
checking for a BSD-compatible install... /usr/bin/install -c
checking for cc... cc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether cc accepts -g... yes
checking for cc option to accept ISO C89... none needed
checking for an ANSI C-conforming const... yes
checking for inline... inline
checking whether we are using gcc version 2.96... no
checking whether g++ requires -fhandle-exceptions... no
checking for a sed that does not truncate output... /usr/bin/sed
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ld used by cc... /usr/ia64-suse-linux/bin/ld
checking if the linker (/usr/ia64-suse-linux/bin/ld) is GNU ld... yes
checking for /usr/ia64-suse-linux/bin/ld option to reload object files... -r
checking for BSD-compatible nm... /usr/bin/nm -B
checking whether ln -s works... yes
checking how to recognise dependent libraries... pass_all
checking how to run the C preprocessor... cc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking dlfcn.h usability... yes
checking dlfcn.h presence... yes
checking for dlfcn.h... yes
checking for g77... no
checking for xlf... no
checking for f77... no
checking for frt... no
checking for pgf77... no
checking for cf77... no
checking for fort77... no
checking for fl32... no
checking for af77... no
checking for xlf90... no
checking for f90... no
checking for pgf90... no
checking for pghpf... no
checking for epcf90... no
checking for gfortran... no
checking for g95... no
checking for xlf95... no
checking for f95... no
checking for fort... no
checking for ifort... no
checking for ifc... no
checking for efc... no
checking for pgf95... no
checking for lf95... no
checking for ftn... no
checking whether we are using the GNU Fortran 77 compiler... no
checking whether accepts -g... no
checking the maximum length of command line arguments... 131072
checking command to parse /usr/bin/nm -B output from cc object... ok
checking for objdir... .libs
checking for ar... ar
checking for ranlib... ranlib
checking for strip... strip
checking if cc supports -fno-rtti -fno-exceptions... no
checking for cc option to produce PIC... -fPIC
checking if cc PIC flag -fPIC works... yes
checking if cc static flag -static works... yes
checking if cc supports -c -o file.o... yes
checking whether the cc linker (/usr/ia64-suse-linux/bin/ld) supports shared libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... cat: /etc/ld.so.conf.d/*.conf: No such file or directory
GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... yes
configure: creating libtool
appending configuration tag "CXX" to libtool
appending configuration tag "F77" to libtool
checking SOSUFFIX from libtool... .so
checking MODSUFFIX from libtool... .so
checking JMODSUFFIX from libtool... .so
checking whether stat file-mode macros are broken... no
checking whether time.h and sys/time.h may both be included... yes
checking for dirent.h that defines DIR... yes
checking for library containing opendir... none required
checking sys/select.h usability... yes
checking sys/select.h presence... yes
checking for sys/select.h... yes
checking sys/socket.h usability... yes
checking sys/socket.h presence... yes
checking for sys/socket.h... yes
checking sys/time.h usability... yes
checking sys/time.h presence... yes
checking for sys/time.h... yes
checking for struct stat.st_blksize... yes
checking for inttypes.h... (cached) yes
checking for stdint.h... yes
checking stddef.h usability... yes
checking stddef.h presence... yes
checking for stddef.h... yes
checking for unistd.h... (cached) yes
checking for char... yes
checking size of char... 1
checking for unsigned char... yes
checking size of unsigned char... 1
checking for short... yes
checking size of short... 2
checking for unsigned short... yes
checking size of unsigned short... 2
checking for int... yes
checking size of int... 4
checking for unsigned int... yes
checking size of unsigned int... 4
checking for long... yes
checking size of long... 8
checking for unsigned long... yes
checking size of unsigned long... 8
checking for long long... yes
checking size of long long... 8
checking for unsigned long long... yes
checking size of unsigned long long... 8
checking for char *... yes
checking size of char *... 8
checking for u_char... yes
checking for u_short... yes
checking for u_int... yes
checking for u_long... yes
checking for u_int8_t... yes
checking for u_int16_t... yes
checking for int16_t... yes
checking for u_int32_t... yes
checking for int32_t... yes
checking for u_int64_t... yes
checking for int64_t... yes
checking for FILE... yes
checking for off_t... yes
checking for pid_t... yes
checking for size_t... yes
checking for time_t... yes
checking for size_t... (cached) yes
checking size of size_t... 8
checking for ssize_t... yes
checking for uintmax_t... yes
checking for uintptr_t... yes
checking for socklen_t... yes
checking for ANSI C exit success/failure values... yes
checking for mutexes... POSIX/pthreads/library
checking for main in -lpthread... yes
checking for library containing sched_yield... none required
checking for library containing fdatasync... none required
checking for library containing getaddrinfo... none required
checking for library containing hstrerror... none required
checking for main in -lm... yes
checking for main in -lnsl... yes
checking for main in -lpthread... (cached) yes
checking for main in -lsocket... no
checking for abort... yes
checking for atoi... yes
checking for atol... yes
checking for getcwd... yes
checking for getenv... yes
checking for getopt... yes
checking for isalpha... yes
checking for isdigit... yes
checking for isprint... yes
checking for isspace... yes
checking for memcmp... yes
checking for memcpy... yes
checking for memmove... yes
checking for printf... yes
checking for qsort... yes
checking for raise... yes
checking for rand... yes
checking for strcasecmp... yes
checking for strcat... yes
checking for strchr... yes
checking for strdup... yes
checking for strerror... yes
checking for strncat... yes
checking for strncmp... yes
checking for strrchr... yes
checking for strsep... yes
checking for strtol... yes
checking for strtoul... yes
checking for _fstati64... no
checking for directio... no
checking for fchmod... yes
checking for fclose... yes
checking for fcntl... yes
checking for fdatasync... yes
checking for fgetc... yes
checking for fgets... yes
checking for fopen... yes
checking for fwrite... yes
checking for getaddrinfo... yes
checking for getrusage... yes
checking for gettimeofday... yes
checking for getuid... yes
checking for hstrerror... yes
checking for localtime... yes
checking for mprotect... yes
checking for pstat_getdynamic... no
checking for pthread_yield... yes
checking for sched_yield... yes
checking for select... yes
checking for sigaction... yes
checking for snprintf... yes
checking for stat... yes
checking for strftime... yes
checking for sysconf... yes
checking for time... yes
checking for vsnprintf... yes
checking for yield... no
checking for ftruncate... yes
checking for clock_gettime... no
checking for ctime_r... yes
checking for 2 or 3 argument version of ctime_r... 2-argument
checking for pread... yes
checking for pwrite... yes
checking for fcntl/F_SETFD... yes
checking for special C compiler options needed for large files... no
checking for FILEOFFSET_BITS value needed for large files... no
checking for mlock... yes
checking for munlock... yes
checking for mmap... yes
checking for munmap... yes
checking for shmget... yes
checking for existence of /usr/lib/tclConfig.sh... loading
checking for 64-bit integral type support for sequences... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating db_cxx.h
config.status: creating db_int.h
config.status: creating clib_port.h
config.status: creating include.tcl
config.status: creating db.h
config.status: creating db_config.h
So I believe configuration is fine. Make also completes fine.
~/WORKAREAS/webplatform/build/dev/20070926/db-4.6.19/build_unix> tclsh
% source ../test/test.tcl
% run_std
Test suite run started at: 09:51 09/26/07
Berkeley DB 4.6.19: (August 10, 2007)
Running environment tests
Running archive tests
Running backup tests
Running file operations tests
Running locking tests
Running logging tests
Running memory pool tests
Running transaction tests
Running deadlock detection tests
Running subdatabase tests
Running byte-order tests
Running recno backing file tests
Running DBM interface tests
Running NDBM interface tests
Running Hsearch interface tests
Running secondary index tests
Running replication tests
Running security tests
Running recovery tests
FAIL:09:51:09 (00:00:00) r: recd: bad command "handles": must be dbremove, dbrename, env, envremove, open, sequence, version, rand, random_int, srand, or debug_check
Running join test
Running btree tests
Running rbtree tests
Running queue tests
Running queueext tests
Running recno tests
Running frecno tests
Running rrecno tests
Running hash tests
UNEXPECTED OUTPUT: Berkeley DB 4.6.19: (August 10, 2007)
UNEXPECTED OUTPUT: FAIL:09:51:06 (00:00:00) r: env: bad command "handles": must be dbremove, dbrename, env, envremove, open, sequence, version, rand, random_int, srand, or debug_check
UNEXPECTED OUTPUT: FAIL:09:51:06 (00:00:00) r: archive: bad command "handles": must be dbremove, dbrename, env, envremove, open, sequence, version, rand, random_int, srand, or debug_check
UNEXPECTED OUTPUT: FAIL:09:51:06 (00:00:00) r: backup: bad command "handles": must be dbremove, dbrename, env, envremove, open, sequence, version, rand, random_int, srand, or debug_check
UNEXPECTED OUTPUT: FAIL:09:51:06 (00:00:00) r: fop: bad command "handles": must be dbremove, dbrename, env, envremove, open, sequence, version, rand, random_int, srand, or debug_check
UNEXPECTED OUTPUT: FAIL:09:51:06 (00:00:00) r: lock: bad command "handles": must be dbremove, dbrename, env, envremove, open, sequence, version, rand, random_int, srand, or debug_check
...and so on to the end. This seems to be the same issue I'd originally reported in this thread. Is there anything else I can do to troubleshoot? I'm not a Tcl expert, unfortunately.
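One way to triage a long ALL.OUT is to collapse the repeated FAIL lines into a count plus the distinct messages. This helper is hypothetical and assumes the `FAIL:HH:MM:SS (elapsed) context: message` format shown in the output above:

```python
import re

def summarize_fails(all_out_text):
    """Collect FAIL lines from ALL.OUT: return the total count plus the
    distinct messages (everything after the timestamp and elapsed fields)."""
    fails = []
    for line in all_out_text.splitlines():
        m = re.match(r"FAIL:\S+\s+\([\d:]+\)\s+(.*)", line)
        if m:
            fails.append(m.group(1))
    return len(fails), sorted(set(fails))
```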
Thanks again for your help or pointers!
--Mark -
After 3.5 upgrade - what to test?
Hello,
We are upgrading from BW 3.0B to BW 3.5 latest patch.
Once the upgrade is complete and turned over to the BW team, what sort of things should be tested?
In terms of Bex, and data extraction/loading/business Content etc.
Does anyone have suggestions/scripts or helpful resources that they have used or followed to test an upgrade?
Thanks!
Nick
In fact, you should do lots of checks before the upgrade as well. There is a guide in the Service Marketplace and you can check that.
Some routine checks.
Before upgrade:
1. RSRV
2. Release all transports, freeze any subsequent development, and lock the user accounts.
3. Usually we do the upgrade in a sandbox test environment first and then the dev, QA, and prod environments.
After upgrade:
1. Check whether all the cubes and routines are in good shape.
2. Check that all reports run the way they used to.
Warning: Some items will break; work with your Basis team.
Ravi Thothadr -
Windows 2008 R2 DC Recovery Test BSOD on boot
Hello everyone.
First off, I have a perfectly healthy production domain with 2 domain controllers and replication occurring between them; the only issue is they are on the same site.
I am in a test environment, testing a scenario where both DCs are broken and I need to build a new one on new hardware, and all I have is a system state backup.
Following the instructions:
Build a new machine closely matching the original machine's hardware (RAM and HDD size).
Install 2008 R2, allocate a static IP address, and reboot into DSRM.
Add the Windows Server Backup feature.
Select restore in WSB and select the system state.
The DC then restores the data and asks to be rebooted.
After the reboot I get a BSOD on boot, and it seems to occur when loading the CLASSPNP.SYS file. Any suggestions on how to resolve this?
Also, is there a way of building a DC using DCPROMO, then restoring the data to another location and importing it into Active Directory?
Basically I need a plan that can be used in a total disaster scenario, with an off-site backup of the system state as my only option.
cheers
Gordon
Have you seen the article below? Also make sure to install exactly the same operating system and service pack level.
http://support.microsoft.com/kb/249694/
Enfo Zipper
Christoffer Andersson – Principal Advisor
http://blogs.chrisse.se - Directory Services Blog -
TFS 2012 Test Agent keep alternating status Online/Disconnected
TFS 2012 environment
Test Controller and Agent Update 3.
We are using a shadow account to connect the Agent (not on domain) to the Controller (on domain).
The problem I'm having is that the Test Agent Status alternates constantly between Online and Disconnected.
I have all TCP and UDP ports open in the Firewall on the Agent and Controller (can't just turn it off because of policy settings).
I have created an environment in Lab Center and can kick off a Lab-Template-based build definition to execute an automated test. The process gets to the point of waiting for the test environment to be ready, starts a test execution, and completes the
test run with a warning: "An error occurred while communicating with Agent vstfs:///LabManagement/TestMachine/###", where ### is a three-digit number.
If I just sit and watch the Test Agent status, with no test runs running, the status will show Online for about 10-15 seconds and then show Disconnected for about 30-60 seconds. And then it repeats.
Another thing to mention: the Test Controller for this effort was cloned from a previous Test Controller that was used for another project. The previous Controller was on the domain at the time of being cloned, which caused issues with a duplicate computer
name on the network, and further issues because the Controller was already assigned to a collection. Another person here fixed this by uninstalling the controller software and then deleting some XML files that were left behind
after the uninstall. I feel that a new controller should be created from scratch, because I suspect there are still conflicts between the old and new Controllers.
Any assistance would be greatly appreciated.
I have additional information to supply to this thread for the benefit of others who may come across this issue.
The first part is making sure the version of Test Manager you are using matches the version of TFS you have running.
After this post, I added additional testing environments and virtual machines in SCVMM and started having the "Online/Disconnected" issue again. With more research, I figured out that I had to add the IP address and host name of
the Test Controller to the hosts file on the Test Agents (path to the hosts file: "C:\Windows\System32\drivers\etc\"). Just open this file in Notepad (or the editor of your choice) and add a line at the end with the IP and host name.
Example hosts file:
# Copyright (c) 1993-2009 Microsoft Corp.
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
# For example:
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
123.456.78.9 computername.domain.com
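The entry format above can be checked mechanically. A small sketch (the hostnames below are made up) that mirrors how the file is interpreted - first column is the IP, remaining columns are host names, and '#' starts a comment:

```python
def parse_hosts(text):
    """Map each hostname to its IP from hosts-file text, skipping blank
    lines and '#' comments (including trailing ones)."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments, then whitespace
        if not line:
            continue
        fields = line.split()
        for name in fields[1:]:
            mapping[name] = fields[0]
    return mapping
```

Running this over the agent's hosts file and checking that the controller's name resolves to the expected IP is a quick way to confirm the fix took effect.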
I have used this several times and it has worked every time (so far) where I use shadow accounts in the TFS test environments. Hope this helps! -
How to Registered Test Server Reports into Production Server
Hi,
Environment:
Test Server
OS: Oracle Linux
Application : R12.1.1
Production Server
OS: Oracle Linux
Application : R12.1.1
Q) I have registered customized reports in the Oracle Applications R12.1.1 test server.
Can I register these reports in the production server with the help of FNDLOAD or any other procedure?
Q) That is, without repeating the registration steps in the production server.
Thank you,
Best Regards,
Use OPEN DATASET to open the file on the application server (AS).
Use READ DATASET ... INTO itab to read the data into an internal table on the AS.
Then use the function module GUI_DOWNLOAD or WS_DOWNLOAD to download the data from the internal table to the presentation server.
And you can't do this in the background.
Regards,
Amal -
Approximately how many seconds/milliseconds should a query with 4-5 table joins, used in an OLTP environment, take to execute? (I'm checking performance in Oracle, and the query is called from a Java/XML application.) I want performance to be optimized when it is called from the application.
Thanks,
GM
There is no single answer to that question. It depends on how big the tables are. It depends on whether there are proper indexes for what you are trying to do. It depends on the hardware. It depends on how many other users there are on the database.
The answer is less than 100 milliseconds. -
Say I have a star-lite schema with just 2 dimension tables and 1 fact table. The classic many-to-many relationship with a resolver/intersection table.
For discussion purposes, let's take the standard student/test/score tables. Student and Test are the dimensions and Score is the fact. In addition to capturing the test score, the requirement is to capture a snapshot of student and test (dimension) attributes when the Score is recorded.
This would mean duplicating the Student and Test attributes in the Score fact table, right? Are there any other ways to model this requirement?
Thanks
VANJ wrote:
"In addition to capturing the test score, the requirement is to capture a snapshot of student and test (dimension) attributes when the Score is recorded."
Seems like a faintly odd requirement, but anyway.
"This would mean duplicating the Student and Test attributes in the Score fact table, right?"
Denormalisation is a fairly common feature of data warehouses. We trade more space for speedier queries.
"Are there any other ways to model this requirement?"
Are there any other ways to denormalise attributes without duplicating them? No. But there is more than one way of implementing the desired outcome. For instance, instead of storing additional attributes with the SCORE you could build a Materialized View which joins STUDENT, TEST and SCORE together, then allow Query Rewrite to use the MView to satisfy pertinent queries rather than the underlying tables.
This clarification of the requirement was added after I posted:
"The requirement is to record the current Student & Test attributes as of the time the Score is recorded."
If the requirement is to maintain a history - and I suppose the clue really is in the question - then there is no alternative but to duplicate the STUDENT and TEST attributes in the SCORE table. In an OLTP environment I would suggest using a separate sub-system of journalling tables to track changes, but that wouldn't make a lot of sense in a DWH context.
Cheers, APC
Edited by: APC on Apr 26, 2013 2:04 PM -
Adding a .smi File to a .mov quicktime movie
I have an unsubbed Quicktime movie and both .smi and .srt files containing subs for the movie. How can I attach the subs in the .smi or .srt files to the .mov file? I also have the video in .mkv format, and used the application SmartConverter to change it into a quicktime movie. Is there a way to add the .smi or .srt file to the .mkv file then convert it into a quicktime movie or something?
Hi,
There is also a myth-buster from Paul Randal which says you do not need one tempdb data file per processor core:

Myth - Tempdb data file should always have one data file per processor core

But if you read Microsoft blogs they would say you should have tempdb files as per cores:

Tempdb best practices

It all boils down to the question: are you actually facing an issue with tempdb? Does the recommendation of one tempdb data file per logical core always hold true? No, it does not, and that is what Paul Randal tried to show in his article. Start with two data files on a different drive and monitor tempdb performance; if you see PFS, GAM or SGAM contention, adding a data file would help. These recommendations generally hold true for the huge OLTP environments where MS tested various features, and your environment might not be the same as the one tested. It's a good practice, not an ideal. The ideal is the one that suits your requirement, and that requires testing.
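If monitoring does show allocation contention, adding a file is a one-liner; a sketch with placeholder path and sizes:

```sql
-- Add one more tempdb data file if PFS/GAM/SGAM contention is observed.
-- Path and sizes are placeholders; size new files the same as the existing
-- ones so the proportional-fill algorithm spreads allocations evenly.
ALTER DATABASE tempdb
ADD FILE (
    NAME       = tempdev2,
    FILENAME   = 'T:\TempDB\tempdev2.ndf',
    SIZE       = 4096MB,
    FILEGROWTH = 512MB
);
```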
SQL Server tempdb - Number of files, a raw truth

If you want to identify contention in tempdb, use the link below:

Identifying Contention In Tempdb

Recommendation to reduce allocation contention in Tempdb
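As a quick check, something along these lines lists sessions currently waiting on tempdb allocation pages (tempdb is database_id 2):

```sql
-- Waits of type PAGELATCH on tempdb pages. resource_description is
-- db:file:page; PFS, GAM and SGAM pages show up at well-known page numbers
-- (e.g. 2:1:1 is the PFS page of file 1).
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM   sys.dm_os_waiting_tasks
WHERE  wait_type LIKE 'PAGELATCH%'
AND    resource_description LIKE '2:%';
```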
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
My TechNet Wiki Articles -
Reg: Exadata and /*+ FULL */ hint -
Hi Experts,
Recently our database was migrated to an Exadata environment, and a DBA told me that using the /*+ FULL */ hint in a query increases query performance.
Doubt -
1) Does it actually enhance performance?
2) I read some articles and got some information that Exadata does some kind of "Smart Scan" and "Cell Offloading" which makes the query efficient. But how does FULL hint contribute here?
This link talks something about this, but I am not sure if it is correct - Some Hints for Exadata SQL Tuning - Part III - SQL Optimizer for Oracle - Toad World
Please share your thoughts and advise.
Thanks and Regards,
-- Ranit
(on Oracle 11.2.0.3.0 - Exadata)

Ranit -
Lots of good advice given by others. A little more to add to the comments already made...
Using a FULL hint as a general tuning rule on Exadata would not be a good idea, just like the sometimes-proposed notion of dropping all indexes on Exadata to improve performance is not a good idea. As Mohamed mentions, a key performance optimization for Exadata is the smart scan, which does require direct path reads. Pushing for smart scans is what drives these types of ideas because, other than the index fast full scan, index scans will not smart scan. However, smart scanning isn't always faster. OLTP-type queries that are looking for one or two rows out of many are still usually faster with an index, even on Exadata. If you find that a hint like FULL does improve a query's performance, then, just as with hints in general, it's better to determine why the optimizer is not picking the better execution plan (a full table scan in this case) in the first place, and resolve the underlying issue.
What you will probably find is you are over-indexed on Exadata. If you have control of the indexes in your environment, test by making certain indexes invisible and seeing if that helps performance. Indexes that were created to eliminate a percentage, even a large percentage, of rows, but not almost all rows for queries are candidates to be dropped. You definitely want to tune for direct path reads.
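A sketch of that invisible-index test (the index name is hypothetical):

```sql
-- Make a candidate index invisible: the optimizer ignores it, but it is
-- still maintained, so it can be restored instantly if performance suffers.
ALTER INDEX scott.emp_dept_ix INVISIBLE;

-- ...run the workload and compare; then either drop it or bring it back:
ALTER INDEX scott.emp_dept_ix VISIBLE;
```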
This is done by doing the index evaluations described above; making sure your stats are accurate and up to date; and, as mentioned by Franck, gathering the Exadata system stats, as this is the only thing that makes the optimizer Exadata-aware. Also, especially if you are running a data warehouse workload, you can look into using parallelism. Running queries in parallel, often even with a degree as small as 2, will help prompt the optimizer to favor direct path reads. Parallelism does need to be kept in check; look into using DBRM to help control it, possibly even enabling parallel statement queuing.
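For example (the EXADATA gathering mode needs a recent 11.2 patch level, and the table and hint below are illustrative only):

```sql
-- Make the optimizer aware of Exadata's scan characteristics:
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('EXADATA');

-- Modest parallelism can prompt direct path reads (and hence smart scans):
SELECT /*+ PARALLEL(s, 2) */ COUNT(*)
FROM   sales s
WHERE  amount > 100;
```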
Hopefully these will give you some ideas of things to look at as you enter the realm of SQL Tuning on Exadata.
Good luck!
-Kasey -
Three questions on patching using opatch
Version: 10gR2, 11g
OS : AIX 6.1
1. In a non-RAC environment, when using the opatch utility, should I shut down the instance running from the ORACLE_HOME which is being patched?
2. We have an 11g oracle instance running for a test OLTP environment. The datawarehousing guys wanted an 11g instance on this machine, so I used DBCA to create another database from the same ORACLE_HOME as the OLTP database. The init.ora parameters for DSS and OLTP were different, which is why I had to create a separate instance. So, when I use opatch to apply a patch, the changes will be reflected on both our DSS and OLTP databases. Right?
3.
On Re: Opatch vs upgradation patch set
Robert Geier says
For rollups opatch is used (e.g. 10.2.0.4.0 to 10.2.0.4.1)
For upgrades (e.g. 10.1 to 10.2) then OUI is used (not opatch)
What does the term rollup mean?

1. Yes, you should shut down instances running on that home. The readme.txt should confirm this, and when you run opatch it will ask if you have shut down.
2. opatch only patches the ORACLE_HOME, which is used by both databases. Check the patch readme.txt if there are any database steps (which will need to be run on both databases)
3. small bugs are fixed by small patches. To avoid having to apply many small patches Oracle groups them into larger patches which include multiple small patches. Often you will need to use the "napply" parameter for opatch when you apply these patches.
Always read the patch documentation before you apply it to be sure you understand what it does.
Always run the patch on a test environment before you run on PROD.