Migrate BI+ Workspace content between production and test (S9203)

Is there an (easy) way to migrate content between production and test in BI+ Workspace S9.2.9.0.3? I know I could extract the content and then simply import it, but that way I lose the object permissions. The BI+ migration utility only supports migrations from older versions, not between instances of the same version.

That isn't a problem if you do it this way:
- Keep all your environment-specific configurations (e.g. the translation.com password and connector data) in OSGi configuration nodes in the repository.
- Use runmodes to pin certain configurations to a certain runmode.
- Have dedicated runmodes for PROD and for TEST.
In that case you can update your TEST environment like this:
1. Restore the instance from the PROD backup.
2. Change the runmode definition in the start script.
3. Start it up.
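The runmode mechanism above can be sketched roughly like this (a minimal sketch following common Sling/CQ conventions; the folder names, jar name, and variable name are assumptions for illustration, not taken from the original post):

```shell
# Repository: pin OSGi configs to a runmode via the config folder name.
#   /apps/myapp/config        applies under every runmode
#   /apps/myapp/config.prod   applies only when the "prod" runmode is active
#   /apps/myapp/config.test   applies only when the "test" runmode is active

# Start script on the TEST machine, after restoring the PROD backup:
# switch the runmode before starting the instance.
RUNMODE=test   # was "prod" in the start script that came with the backup
java -jar cq-quickstart.jar -r "${RUNMODE}"
```

With this in place, the restored instance picks up the TEST credentials automatically on the next start, and no content or permissions have to be touched.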
Jörg

Similar Messages

  • Copy data between productive and test system

    Hi everybody,
I need to copy data from one cube in the productive system into the same cube in the test system.
    How can I do it?
    Regards
    Erwin

If you need to copy data from one system to another, maybe you can do a Data Refresh or an SAP_DATA system copy.
If you only want to copy data from one cube to another in the other system, you should try transforming the data in the cube into a report or an ODS, downloading it to a CSV file, and then running a transformation from PC_FILE to ODS and cube, or from PC_FILE to cube.
Hope it works.
Greetings.
Ignacio.

  • How to sync data between production and R12 instance

    Hi,
    I am working on an Oracle Apps upgrade project.
    I have now upgraded 11.5.7 to R12 (12.0.4) successfully. I started the upgrade task with a May 1 backup of production data on a new machine.
    The client wants R12 to go live on 1 August 2009.
    Please suggest how to sync data between production and the R12 instance.
    Thanks
    Anup

    Hi,
    It is not possible to do what you propose (sync data from an 11.5.7 system to a 12.0.4 system). You will need to re-run the upgrade against the August 1 data, following your original upgrade procedures.
    Regards,
    John P.
    http://only4left.jpiwowar.com

  • DataSource / structures mismatched between Dev and Test systems

    Hi,
    We are working on a scenario where XI updates data into the PSA through an ABAP proxy.
    The scenario worked perfectly in the development system.
    We transported the objects from development to test. The structures in SE11 are mismatched between the development and test systems, as shown below.
    The DataSources (ZDS_RECIHDR and ZDS_RECTPALL) look OK, but when I look at the structures in SE11 they are not correct; they got swapped.
    Development:
    /BIC/CQZDS_REC00001000 - Header (ZDS_RECIHDR)
    /BIC/CQZDS_REC00003000 - Allocation (ZDS_RECTPALL)
    Test:
    /BIC/CQZDS_REC00001000 - Allocation (ZDS_RECTPALL)
    /BIC/CQZDS_REC00003000 - Header (ZDS_RECIHDR)
    Kindly let me know where it might have gone wrong?
    Thanks
    Deepthi

    We have already done that, and it is still failing.
    The transport fails with the following error:
    Program ZPI_CL_IA_PAYMENT_ALLOCATION1=CP, Include ZPI_CL_IA_PAYMENT_ALLOCATION1=CM001: Syntax error in line 000016
    The data object 'L_S_DATA' has no component called'/BIC/ZSALENUM', but there is a component called
    Program ZPI_CL_IA_PAYMENT_HEADER======CP, Include ZPI_CL_IA_PAYMENT_HEADER======CM001: Syntax error in line 000016
    The data object 'L_S_DATA' has no component called'/BIC/ZTRANDATE', but there are the following com
    The header and allocation structures are mismatched in SE11; that is why it is failing.
    Any more ideas, please?

  • How to place content between header and tabs?

    I have a header that has to stay constant throughout the portal, but below it I have three links
    (I am an employee / employer / broker)
    which have to be shown only on the home page, above the tabs.
    How can I achieve this?
    How do I place content between the header and the tabs? Kindly help.

    Hi Samiran
    Try these approaches and see if they work.
    1. In the header section, use the header/footer shell and add a Header Portlet. The JSP file associated with this Header Portlet will hold all the static content in its top section. In the bottom section, add the three links, say in the right-hand corner, and show them only based on some request property like isHome. Then associate a backing file with the main book that contains Home and the other pages. Within this backing file, in lifecycle methods like preRender or handlePostBack, get an instance of BookManager, go over the pages, and see which page is active. For the active page, check its page definition label, which is always unique. If the label is something like home_page_def (the definition label you gave the home page), set a key in the request, like isHome=true. My only doubt is that after the book's backing file is triggered, the header has to be reloaded, because only then can it pick up the request attributes.
    2. Create a brand-new portlet, say a HomePageLinks portlet. Make its title not visible and set the other user-interface properties like NoBorder, NoTheme, etc. The associated JSP will contain the three links you mentioned, right-aligned (you can use CSS styles for the alignment). Drop this portlet in the header shell area: you already have the HeaderPortlet at the top, and below it you will have this HomePageLinks portlet. Then associate a backing file with this portlet so that it shows only when the book's current active page is the home page, comparing the page definition label as described above.
    In both scenarios, the only concern is that when you click on different pages, the entire portal has to be rendered right from the top header; only then will the backing file set the key, and the HomePageLinks portlet show or hide accordingly.
    Try firing an event when the home page is clicked. The listener for this event can be the HomePageLinks portlet. I guess the event mechanism should work irrespective of where the portlet is placed. In the event listener, see if you can show/hide the portlet.
    The only challenge is that the header section needs to be reloaded every time you click on a tab.
    Start by putting in some backing files and System.out.println calls to see whether the header section gets reloaded on a click on any tab.
    These are just my thoughts off the top of my head. Other forum users may have better alternatives or different versions of the above approaches.
    Thanks
    Ravi Jegga

  • SQL Server Gateway Licensing - Production and Test

    I have a client who has two databases that need to use the SQL Server Gateway - production and test (two separate servers).
    As the gateway is separately licensed and can be installed on a machine separate from the database server, am I right in saying that only one license is needed for the gateway,
    and that both the test and production databases can use it?
    The Oracle partner that they use is telling them that they need to buy two gateway licenses.
    Can any one help? Thanks!

    Hello User,
    You can find the application under the path below:
    Domain_name: expand "Environment" and select "Deployments"; the applications deployed in this domain are listed there.
    Please refer to http://docs.oracle.com/cd/E13222_01/wls/docs100/intro/console.html
    Regards
    Laksh

  • Error: Interlock between production and Sales Department

    Hi All,
    Error: interlock between production and the Sales Department. Please give me a solution for this.
    Regards,
    SAP SD user

    Hi,
    There are a lot of integration points between PP and SD.
    To name a few: MTO, the SOP process, batch management...

  • HD content between MacBook and iMac

    I searched around the forums and couldn't find anything that answered this exact question. It's a bit of a tough one, I'm sure, so here goes.
    Recently I purchased a new 13" MacBook Pro and used the automatic data transfer to set up the machine, so it has exactly the same programs as the iMac that I purchased a year or so ago. Now, I purchased a few HD TV shows on the MacBook to test a way to transfer the shows between computers. I know how to do the iPod transfer method (moving content to the iPod, then switching machines and checking for purchased content), but that method only worked for the standard-definition version of the show.
    The question: how do I get the HD version to move from the MacBook to the iMac and vice versa? Is there a method, or will the standard version just have to do? This is not urgent; I'm just curious.

    The HD version is not synced to your iPod as it cannot be played there.
    You'll have to connect your two Macs together or use an external HD or flash drive to transfer the HD version.
    See this: How to use FireWire target disk mode, http://support.apple.com/kb/HT1661

  • Production and Test Servers

    What is the purpose of having test and production servers in development, for example? Exactly what is the scenario in which they are of benefit?
    Is there any interaction between the two servers?

    Q: I need to know why we have a test server.
    A: The test server is mainly used to test the deployment scripts developed by the development team, and to do proper functional testing after you apply patches.
    Q: How is a test server used during implementation? Only for patching?
    A: To verify the setup during the implementation and to test the patches (usually there is a separate instance to test the patches; once the patches are applied successfully, they are promoted to the test server for testing before going to production).
    Q: What things should I not do to production before I test them?
    A: If all the testing is done successfully on the test instance, it goes to a UAT instance (for another round of testing) before it goes to production. For production, just make sure you have a valid backup before you patch the instance or move any setup.
    Q: If I am to clone prod to test, is it a backup only?
    A: Run preclone on production, copy/restore the files to the test server, and run postclone on the test instance; I believe you are aware of the cloning docs.
    Q: After the cloning, will they have the same instance name?
    A: No; production is usually called PROD and the test instance is usually called TEST.
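    The preclone/postclone steps described above correspond to Oracle's Rapid Clone flow, which can be sketched as follows (a rough outline only; exact script locations and prompts vary by E-Business Suite release, and the context directory name here is a placeholder):

    ```shell
    # On the PRODUCTION system: prepare both tiers for cloning
    cd $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>
    perl adpreclone.pl dbTier          # database tier
    # ...and adpreclone.pl appsTier from the apps-tier scripts directory

    # Copy/restore the file systems to the TEST server, then on TEST:
    perl adcfgclone.pl dbTier          # prompts for the new SID, e.g. TEST
    perl adcfgclone.pl appsTier
    ```

    The postclone (adcfgclone.pl) step is where the instance gets its new name, which is why the cloned test instance does not keep the PROD SID.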
    Thanks,
    Hussein

  • Is it possible to move content between private and public sites after publishing the latter?

    Hi. Is anyone else in this position? My institution kept its original iTunes U site after migrating our public content to the new public site last year. We now have separate public and private (university log-in only) sites and must maintain both. The Public Site Administrator allowed the copying of existing content from the original iTunes U site to the new public site, but ONLY prior to publication of the public site. I have two related questions for the community:
    1) How is it possible to maintain a workflow in which faculty create a course for the private iTunes U site (wanting to keep the content restricted to our students while the course is live), but then move the content over to the public site once the semester ends?
    2) With the new app for Courses and the Course Manager on the public site, what do we do with the courses already created by faculty as Collections (either in the public or private sites)?
    I hope there are folks out there that can help.
    Thanks!
    Kevin

    I'd like to see a discussion on this exact topic as well. Have you determined yet how to move a private course (apple hosted) to the public site? Can you use an RSS feed from the private?
    joe

  • Export Java code out of the database? Difference between Prod and Test

    Hi all,
    I'm a seasoned DBA, but I have never really dealt with Java, so this is all new to me. Within one of my test databases (10.2.0.3.0), I have a database package - within the package is a procedure that has this Java call:
    language java name 'com.clinapps.pmd.etl.interfaces.forecasting.itemtrial.helper.SSFItemOutboundPMDHelper.processSSFItemInterface()';
    This Java code goes and exports a file to .xml format. The basic problem is that in our Test environment, there is a problem in the xml file that gets generated - the second line is missing the first few characters. However, in all of our other environments, including Production and Validation, it all works fine - the xml file is good. What has me confused, is that we've done several database refreshes, where we export from production, then import it down to test. So from everything I'm looking at, the schema is exactly the same - but something is different, causing the issue in the xml file that is being generated. I was wondering if I could get into the database and export the Java code, then that is something where I could confirm if there's a difference or not, from Prod to Test.
    Does anyone have any input/feedback on how to do this? I've been googling, and tried several methods, including dbms_metadata.get_ddl, and DBMS_JAVA.EXPORT_SOURCE, but I can't get anything to work.
    Or if anyone has any other thoughts on how to debug/troubleshoot this, I'd appreciate any ideas!!
    Thank you from a Java newbie!!

    OK, that is helping me get somewhere! If I change the query to this, I get results:
    select obj_num,long_name from SYS."KU$_JAVA_CLASS_VIEW" where LONG_NAME like '%SSFItemOutboundPMDHelper%';
    171879
    com/clinapps/pmd/etl/interfaces/forecasting/itemtrial/helper/SSFItemOutboundPMDHelper
    Note that it has / instead of . as the separator.
    Now if I do the second query, I see this:
    SQL> select * from SYS."KU$_JAVA_OBJNUM_VIEW" where OBJ_NUM=171879;
    OBJ_NUM DATAOBJ_NUM OWNER_NUM OWNER_NAME
    NAME NAMESPACE SUBNAME
    TYPE_NUM TYPE_NAME CTIME MTIME
    STIME STATUS REMOTEOWNER
    LINKNAME
    FLAGS OID SPARE1 SPARE2 SPARE3
    SPARE4
    SPARE5
    SPARE6
    171879 123 PMD50
    /34bae3f1_SSFItemOutboundPMDHe 1
    29 JAVA_CLASS 2012-06-28 10:44:22 2012-06-28 10:44:22
    2012-06-28 10:44:22 1
    8 6 65535
    Any thoughts on where to go from here??

  • BDB read performance problem: lock contention between GC and VM threads

    Problem: BDB read performance is really bad once the size of the BDB crosses 20GB. Once the database crosses 20GB or thereabouts, it takes more than one hour to read/delete/add 200K keys.
    After a point, about 15-30K of these 200K keys are new; this number should eventually come down, and after a point there should be no new keys at all.
    Application:
    A Transactional Data Store application: a single-threaded process that reads one key's data, deletes the data, and adds new data. The keys are really small (20 bytes) and the data is large (grows from 1KB to 100KB).
    On one machine, I have a total of 3 processes running, with each process accessing its own BDB on a separate RAID 1+0 drive. So, as I see it, there should really be no disk I/O wait slowing down the reads.
    After a point (past 20GB), there are about 4-5 million keys in my BDB, and the data associated with each key can be anywhere between 1KB and 100KB. Eventually every key will have 100KB of data associated with it.
    Hardware:
    16 core Intel Xeon, 96GB of RAM, 8 drive, running 2.6.18-194.26.1.0.1.el5 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
    BDB config: BTREE
    bdb version: 4.8.30
    bdb cache size: 4GB
    bdb page size: experimented with 8KB, 64KB.
    3 processes, each process accesses its own BDB on a separate RAIDed(1+0) drive.
    envConfig.setAllowCreate(true);                       // create the environment if it does not exist
    envConfig.setTxnNoSync(ourConfig.asynchronous);       // on commit, do not flush the log to disk
    envConfig.setThreaded(true);
    envConfig.setInitializeLocking(true);                 // enable the locking subsystem
    envConfig.setLockDetectMode(LockDetectMode.DEFAULT);  // default deadlock-detection policy
    When writing to BDB (asynchronous transactions):
    TransactionConfig tc = new TransactionConfig();
    tc.setNoSync(true);            // commit without flushing the transaction log
    When reading from BDB (allow reading from uncommitted pages):
    CursorConfig cc = new CursorConfig();
    cc.setReadUncommitted(true);   // dirty reads: no read locks are held
    BDB stats: BDB size 49GB
    $ db_stat -m
    3GB 928MB Total cache size
    1 Number of caches
    1 Maximum number of caches
    3GB 928MB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    2127M Requested pages found in the cache (97%)
    57M Requested pages not found in the cache (57565917)
    6371509 Pages created in the cache
    57M Pages read into the cache (57565917)
    75M Pages written from the cache to the backing file (75763673)
    60M Clean pages forced from the cache (60775446)
    2661382 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    500593 Current total page count
    500593 Current clean page count
    0 Current dirty page count
    524287 Number of hash buckets used for page location
    4096 Assumed page size used
    2248M Total number of times hash chains searched for a page (2248788999)
    9 The longest hash chain searched for a page
    2669M Total number of hash chain entries checked for page (2669310818)
    0 The number of hash bucket locks that required waiting (0%)
    0 The maximum number of times any hash bucket lock was waited for (0%)
    0 The number of region locks that required waiting (0%)
    0 The number of buffers frozen
    0 The number of buffers thawed
    0 The number of frozen buffers freed
    63M The number of page allocations (63937431)
    181M The number of hash buckets examined during allocations (181211477)
    16 The maximum number of hash buckets examined for an allocation
    63M The number of pages examined during allocations (63436828)
    1 The max number of pages examined for an allocation
    0 Threads waited on page I/O
    0 The number of times a sync is interrupted
    Pool File: lastPoints
    8192 Page size
    0 Requested pages mapped into the process' address space
    2127M Requested pages found in the cache (97%)
    57M Requested pages not found in the cache (57565917)
    6371509 Pages created in the cache
    57M Pages read into the cache (57565917)
    75M Pages written from the cache to the backing file (75763673)
    $ db_stat -l
    0x40988 Log magic number
    16 Log version number
    31KB 256B Log record cache size
    0 Log file mode
    10Mb Current log file size
    856M Records entered into the log (856697337)
    941GB 371MB 67KB 112B Log bytes written
    2GB 262MB 998KB 478B Log bytes written since last checkpoint
    31M Total log file I/O writes (31624157)
    31M Total log file I/O writes due to overflow (31527047)
    97136 Total log file flushes
    686 Total log file I/O reads
    96414 Current log file number
    4482953 Current log file offset
    96414 On-disk log file number
    4482862 On-disk log file offset
    1 Maximum commits in a log flush
    1 Minimum commits in a log flush
    160KB Log region size
    195 The number of region locks that required waiting (0%)
    $ db_stat -c
    7 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    160 Number of lock object partitions
    0 Number of current locks
    1218 Maximum number of locks at any one time
    5 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    0 Number of current lockers
    8 Maximum number of lockers at any one time
    0 Number of current lock objects
    1218 Maximum number of lock objects at any one time
    5 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    400M Total number of locks requested (400062331)
    400M Total number of locks released (400062331)
    0 Total number of locks upgraded
    1 Total number of locks downgraded
    0 Lock requests not available due to conflicts, for which we waited
    0 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    1MB 544KB The size of the lock region
    0 The number of partition locks that required waiting (0%)
    0 The maximum number of times any partition lock was waited for (0%)
    0 The number of object queue operations that required waiting (0%)
    0 The number of locker allocations that required waiting (0%)
    0 The number of region locks that required waiting (0%)
    5 Maximum hash bucket length
    $ db_stat -CA
    Default locking region information:
    7 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    160 Number of lock object partitions
    0 Number of current locks
    1218 Maximum number of locks at any one time
    5 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    0 Number of current lockers
    8 Maximum number of lockers at any one time
    0 Number of current lock objects
    1218 Maximum number of lock objects at any one time
    5 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    400M Total number of locks requested (400062331)
    400M Total number of locks released (400062331)
    0 Total number of locks upgraded
    1 Total number of locks downgraded
    0 Lock requests not available due to conflicts, for which we waited
    0 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    1MB 544KB The size of the lock region
    0 The number of partition locks that required waiting (0%)
    0 The maximum number of times any partition lock was waited for (0%)
    0 The number of object queue operations that required waiting (0%)
    0 The number of locker allocations that required waiting (0%)
    0 The number of region locks that required waiting (0%)
    5 Maximum hash bucket length
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock REGINFO information:
    Lock Region type
    5 Region ID
    __db.005 Region name
    0x2accda678000 Region address
    0x2accda678138 Region primary address
    0 Region maximum allocation
    0 Region allocated
    Region allocations: 6006 allocations, 0 failures, 0 frees, 1 longest
    Allocations by power-of-two sizes:
    1KB 6002
    2KB 0
    4KB 0
    8KB 0
    16KB 1
    32KB 0
    64KB 2
    128KB 0
    256KB 1
    512KB 0
    1024KB 0
    REGION_JOIN_OK Region flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock region parameters:
    524317 Lock region region mutex [0/9 0% 5091/47054587432128]
    2053 locker table size
    2053 object table size
    944 obj_off
    226120 locker_off
    0 need_dd
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock conflict matrix:
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by lockers:
    Locker Mode Count Status ----------------- Object ---------------
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by object:
    Locker Mode Count Status ----------------- Object ---------------
    Diagnosis:
    I'm seeing way too much lock contention on the Java garbage-collector threads and also on the VM thread when I strace my Java process, and I don't understand the behavior.
    We are spending more than 95% of the time trying to acquire locks, and I don't know what these locks are. Any info here would help.
    Earlier I thought the overflow pages were the problem, as the 100KB data size was exceeding all overflow-page limits. So I implemented the duplicate-keys concept, chunking my data to fit within the overflow-page limits.
    Now I don't see any overflow pages in my system, but I still see bad BDB read performance.
    $ strace -c -f -p 5642 --->(607 times the lock timed out, errors)
    Process 5642 attached with 45 threads - interrupt to quit
    % time     seconds  usecs/call     calls    errors syscall
    98.19    7.670403        2257      3398       607 futex
     0.84    0.065886           8      8423           pread
     0.69    0.053980        4498        12           fdatasync
     0.22    0.017094           5      3778           pwrite
     0.05    0.004107           5       808           sched_yield
     0.00    0.000120          10        12           read
     0.00    0.000110           9        12           open
     0.00    0.000089           7        12           close
     0.00    0.000025           0      1431           clock_gettime
     0.00    0.000000           0        46           write
     0.00    0.000000           0         1         1 stat
     0.00    0.000000           0        12           lseek
     0.00    0.000000           0        26           mmap
     0.00    0.000000           0        88           mprotect
     0.00    0.000000           0        24           fcntl
    100.00    7.811814                 18083       608 total
    The above stats show that too much time is spent locking (futex calls), and I don't understand that, because the application is really single-threaded. I have turned on asynchronous transactions, so the writes might be flushed asynchronously in the background, but spending that much time locking and timing out seems wrong.
    So there is possibly something I'm not setting, or something weird about the way the JVM is behaving on my box.
    I grepped for futex calls in one of my strace log snippets, and I see that there is a VM thread that grabbed the mutex the maximum number of times (223), followed by the garbage-collector threads. The following are the lock counts and thread PIDs within the process.
    These are the 10 GC threads (each thread has grabbed the lock about 85 times on average):
      86 [8538]
      85 [8539]
      91 [8540]
      91 [8541]
      92 [8542]
      87 [8543]
      90 [8544]
      96 [8545]
      87 [8546]
      97 [8547]
      96 [8548]
      91 [8549]
      91 [8550]
      80 [8552]
    VM Periodic Task Thread" prio=10 tid=0x00002aaaf4065000 nid=0x2180 waiting on condition (Main problem??)
     223 [8576] ==> grabbing a lock 223 times -- not sure why this is happening…
    "pool-2-thread-1" prio=10 tid=0x00002aaaf44b7000 nid=0x21c8 runnable [0x0000000042aa8000] -- main worker thread
       34 [8648] (main thread grabs futex only 34 times when compared to all the other threads)
    The load average seems OK, though my system thinks it has very little memory left, and I think that is because it's using a lot of memory for the file-system cache.
    top - 23:52:00 up 6 days, 8:41, 1 user, load average: 3.28, 3.40, 3.44
    Tasks: 229 total, 1 running, 228 sleeping, 0 stopped, 0 zombie
    Cpu(s): 3.2%us, 0.9%sy, 0.0%ni, 87.5%id, 8.3%wa, 0.0%hi, 0.1%si, 0.0%st
    Mem: 98999820k total, 98745988k used, 253832k free, 530372k buffers
    Swap: 18481144k total, 1304k used, 18479840k free, 89854800k cached
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    8424 rchitta 16 0 7053m 6.2g 4.4g S 18.3 6.5 401:01.88 java
    8422 rchitta 15 0 7011m 6.1g 4.4g S 14.6 6.5 528:06.92 java
    8423 rchitta 15 0 6989m 6.1g 4.4g S 5.7 6.5 615:28.21 java
    $ java -version
    java version "1.6.0_21"
    Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
    Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
    Maybe I should make my application a Concurrent Data Store app, as there is really only one thread doing the writes and reads. But I would like to understand why my process is spending so much time in locking.
    Can I try any other options? How do I prevent such heavy locking from happening? Has anyone seen this kind of behavior? Maybe this is all normal; I'm pretty new to using BDB.
    If there is a way to disable locking, that would also work, as there is only one thread that's really doing all the work.
    Should I disable the file-system cache? One thing is that my application does not utilize the cache very well: once I visit a key, I don't visit it again for a very long time, so it's very possible that the key has to be read from disk again.
    It is possible that I'm thinking about this completely wrong, focusing too much on locking behavior, when the problem is elsewhere.
    Any thoughts/suggestions are welcome. Your help on this is much appreciated.
    Thanks,
    Rama
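    If the single-writer pattern really holds, the Concurrent Data Store idea at the end of the post would look roughly like this with the base com.sleepycat.db API (a minimal configuration sketch, not a drop-in replacement for the poster's setup):

    ```java
    // Sketch: open the environment in Concurrent Data Store (CDS) mode instead
    // of Transactional Data Store. CDS coordinates one writer with many readers
    // internally, without the full locking and transaction subsystems.
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setAllowCreate(true);
    envConfig.setInitializeCache(true);  // keep the large BDB cache
    envConfig.setInitializeCDB(true);    // CDS mode; do NOT also call
                                         // setInitializeLocking/setTransactional
    Environment env = new Environment(envHome, envConfig);  // envHome: the env home directory (File)
    ```

    The trade-off is that CDS has no transactions and no crash recovery, so it only fits if the database can be rebuilt after a crash.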

    Hi,
    Looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost here?:
    Berkeley DB
    Thanks,
    mark

  • Playback is inconsistent between original and test disc

    I have created a DVD and just received the test disc. Before we sent it off for replication, we tested the DVD both in the simulator and in the Apple DVD Player, and everything worked as it should. We couldn't burn a disc ourselves because it was a DVD-9.
    We set button highlight markers so that when a commercial is playing you can hit Enter, jump to a longer commercial, and then jump back to the original program.
    The problem is that we can't pause the video in either of those sections. The first commercial is on the main track, and no user functions are disabled. The second part that you jump to is on another track, where the ability to scan and to chapter both forwards and backwards IS disabled... nothing else.
    Yet when we watch the test disc from replication, we can't pause in either of those sections.
    I checked the project in DVDSP and everything is as it should be. When I simulate it, I can pause. When I play the VIDEO_TS folder in the DVD Player, I can pause. So why is the test disc different?
    I am assuming this is a problem on the DVDSP end and not on the replicator's part. So what do I do?

    I basically set up one track which holds all of the main "play all" content. There are three stories set up for that track, so whether you hit Play All or either of the variations of Play All we created, those are covered.
    The commercials are set up within those, with their buttons. If you press Enter while the button-over-video is showing, the pieces you jump to are on another track. There are also stories set up for that track that coincide with the stories on the main track.
    The only UOP that's disabled is on that second track: I disabled scan forward/backward and chapter forward/back. If I hadn't disabled them, you could scan forward to the next piece of video, which I did not want.
    I used no scripting.
    We contacted another third-party replicator, which backed up what our replicator said: that it's an authoring issue.
    So we made some calls and found a professional authoring house. They seemed to feel that we may have hit the wall with what DVDSP can do with regard to the functionality we are looking for. They said that DVDSP basically oversimplifies certain commands which require more in-depth scripting, and that we would need a program like Sonic to do what we need. So that is where we are with that.
    On a side note, I cut out half of our content and burned a disc to test on various players. That disc was buggy as well, working on some players while not working on others.
    Some of the buttons over video worked about half of the time, while others didn't. Also, some of the buttons that did work only worked for about 3/4 of the button's duration: if I had a 2-minute chapter with a button over video for the entire 2 minutes, the button only worked for the first minute and a half, and for the last 30 seconds nothing happened when you activated it.
    So the only progress I have made is establishing that the problem lies on our end.

  • Production and Test deployment

    Hi all,
    I have a web service (a stateless Java SOAP web service) and I want to have two different deployments: one for test and another for production. I know how to use two different URIs, but now I also want my program to use a different configuration file in each case.
    How can I pass an argument to my program when it is deployed as a web service?
    Best regards
    FJJ

    Hello John,
    I am not sure I understand your issue correctly:
    1- You have deployed your service in two different environments (http://serverTest and http://serverProd).
    2- Now you want to be able to test it from your client application, right?
    To do that, if you are using a Java client (web service proxy), you can use the proxy.setEndPoint() method to change the endpoint you want to use. You can, for example, configure that in a properties file, and use JMX to manipulate this information.
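    As a sketch of the configuration-file side of the question, the service can pick its properties file from a JVM system property that is set differently in each deployment (the property name app.env and the file names here are made up for illustration):

    ```java
    // Choose an environment-specific configuration file from a system property.
    // Pass e.g. -Dapp.env=production to the production server's JVM.
    public class EnvConfig {
        static String configFileFor(String env) {
            // default to the test configuration when nothing is set
            if ("production".equals(env)) {
                return "config-production.properties";
            }
            return "config-test.properties";
        }

        public static void main(String[] args) {
            String env = System.getProperty("app.env", "test");
            System.out.println(configFileFor(env));  // prints config-test.properties by default
        }
    }
    ```

    The same idea works with a JNDI environment entry or a deployment plan instead of a system property; the point is that the environment choice lives outside the packaged archive, so the same WAR/EAR can be deployed to both servers.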
    Regards
    Tugdual Grall

  • Difference between PRT and testing equipment

    Hi friends,
    How do I assign a PRT to an operation automatically?
    Also, please let me know whether I can assign a piece of equipment (a testing fixture) to an operation, and if so, how.

    Hi,
    PRTs belong to the group of operating resources. PRTs are involved in the production process; they are also used to support, or to fulfill the prerequisites for, performing a maintenance task.
    Possible PRTs are, for example, tools, measuring equipment, drawings, NC programs, and cranes.
    You create the PRT as equipment with IE25, or choose Logistics → Project System → Basic data → Master data → Equipment → Create.
    You can then assign your equipment PRT to a routing operation.
    Hope it helps you.
    Regards,
    Alok Tiwari

Maybe you are looking for

  • Clicks and Pops Soundtrack Pro does not get rid of?

    How do I edit out (reduce) clicks and pops? (Soundtrack Pro's Fix All does not take care of that!) What I call clicks (noise) show up in the sample editor as a straight line. I have seen videos of someone erasing, shortening these noise spikes in other applic

  • Incorrect link to version of PS in Prefs of LR4

    I have a Windows 7 Pro 64-bit OS I have Photoshop 12.1 [extended version]; Photoshop 13 Beta; Lightroom 4 I originally had Photoshop 12 [standard version] when I loaded Lightroom 3 and Lightroom 4.  When LR 4 beta was upgraded to the current version,

  • 8.1.3 Caused Safari Search Bar Issue

    I Upgraded to 8.1.3 this morning and now the Safari search bar has moved to the very top of my screen, merged with the battery and use information. This is making it very difficult to use Safari as it is almost impossible to click on the search bar t

  • Disable Organizational Data in VA01 Screen

    Dear all, Is there any way to disable the organizational data in VA01 from being populated automatically? The organizational data (Sales Area, Sales Office, Sales Group) will be populated based on the entries of the respective fields in a previous tr

  • No PXI module detected in MAX

    I am using a PXI-1033 chassis in which I have installed two PXI modules: the PXI-6682H from NI, and the PXI-9816 from ADLINK. My PC, running Windows 7 Pro 64-bit, is connected to the chassis via an ExpressCard. In the MAX program