R/3 dataflow concurrent loading

Hi
In my environment, users will execute batch jobs via web services. The users pass a fixed set of parameters to the web service job. The job has an R/3 dataflow connected to an SAP GL source.
My question is how to make the data transport filename dynamic, since it does not accept global variables.
I have tried the following options:
1. I could use a substitution parameter, but in my case the web service call cannot pass a value for the substitution parameter (we are using BO FIM 7.0 to call DS jobs, and FIM passes only a fixed set of parameters to DS), and I cannot assign a value to a substitution parameter dynamically within the job using the remaining parameters.
2. I could use a direct table connection using RFC_TABLE_READ, but that may not be efficient: we will be accessing multiple SAP systems with many joins and lookups, and I am not sure this will scale to the data volume. We have clear benchmarking targets that we want to attain.
The DS SAP supplemental guide (p. 170) says that R/3 dataflows submitted concurrently are serialized, but run in parallel if the server has more than one batch processor available. It does not mention how the data transport filename is handled if the same job runs concurrently.
How is the data transport filename handled if the same job is being run concurrently by different users with different parameters?
Has anyone faced this issue? Any workarounds would be helpful.
We are using DS 3.1 on Windows.
Thanks
Dinesh

There was a fix in XI 3.1 Service Pack 1 to support global variables in data transport file names.

Similar Messages

  • DiskOrderedScan hangs under concurrent load

    Hi,
    I have been testing DOScan with live traffic simultaneously hitting the database. At high enough throughput (~700 ops/s), the scan simply stalls. I have multiple threads doing live traffic (gets + updates)
    and one thread scanning the keys, with the following config. For each key it obtains in the scan, it does a get():
    DiskOrderedCursorConfig docc = new DiskOrderedCursorConfig();
    docc.setInternalMemoryLimit(64 * 1024 * 1024); // 64 MB for the scan's internal buffers
    docc.setKeysOnly(true);                        // keys only, no data
    docc.setMaxSeedMillisecs(500);                 // cap time spent seeding from the in-cache Btree
    I took a stack dump of the hung process. The following are the stacks of the DO producer and the scanning thread.
    "DiskOrderedScan Producer Thread for Thread[voldemort-niosocket-server2,5,main]" daemon prio=10 tid=0x00007f930498c000 nid=0x30d1 waiting on condition [0x00007f92fa25f000]
    java.lang.Thread.State: TIMED_WAITING (parking)
         at sun.misc.Unsafe.park(Native Method)
         - parking to wait for <0x00000002bae00078> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
         at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
         at java.util.concurrent.ArrayBlockingQueue.offer(ArrayBlockingQueue.java:287)
         at com.sleepycat.je.dbi.DiskOrderedCursorImpl$2.processLSN(DiskOrderedCursorImpl.java:415)
         at com.sleepycat.je.dbi.SortedLSNTreeWalker.callProcessLSNHandleExceptions(SortedLSNTreeWalker.java:581)
         at com.sleepycat.je.dbi.SortedLSNTreeWalker.processResidentChild(SortedLSNTreeWalker.java:488)
         at com.sleepycat.je.dbi.DiskOrderedCursorImpl$DiskOrderedCursorTreeWalker.processResidentChild(DiskOrderedCursorImpl.java:236)
         at com.sleepycat.je.dbi.SortedLSNTreeWalker.accumulateLSNs(SortedLSNTreeWalker.java:463)
         at com.sleepycat.je.dbi.DiskOrderedCursorImpl$DiskOrderedCursorTreeWalker.walkInternal(DiskOrderedCursorImpl.java:206)
         at com.sleepycat.je.dbi.SortedLSNTreeWalker.walk(SortedLSNTreeWalker.java:315)
         at com.sleepycat.je.dbi.DiskOrderedCursorImpl$1.run(DiskOrderedCursorImpl.java:358)
    "voldemort-niosocket-server2" daemon prio=10 tid=0x00007f951cc31800 nid=0x5738 waiting on condition [0x00007f93fc4c3000]
    java.lang.Thread.State: WAITING (parking)
         at sun.misc.Unsafe.park(Native Method)
         - parking to wait for <0x00000003021a79d8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
         at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
         at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:941)
         at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1261)
         at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:594)
         at com.sleepycat.je.latch.SharedLatch.acquireShared(SharedLatch.java:149)
         at com.sleepycat.je.tree.IN.latchShared(IN.java:484)
         at com.sleepycat.je.tree.Tree.getRootINInternal(Tree.java:2060)
         at com.sleepycat.je.tree.Tree.getRootIN(Tree.java:2036)
         at com.sleepycat.je.tree.Tree.search(Tree.java:1215)
         at com.sleepycat.je.dbi.CursorImpl.searchAndPosition(CursorImpl.java:2069)
         at com.sleepycat.je.Cursor.searchInternal(Cursor.java:2666)
         at com.sleepycat.je.Cursor.searchAllowPhantoms(Cursor.java:2576)
         at com.sleepycat.je.Cursor.searchNoDups(Cursor.java:2430)
         at com.sleepycat.je.Cursor.search(Cursor.java:2397)
         - locked <0x000000029823d400> (a com.sleepycat.je.Cursor)
         at com.sleepycat.je.Database.get(Database.java:1042)
         at voldemort.store.bdb.BdbStorageEngine.get(BdbStorageEngine.java:233)
         at voldemort.store.bdb.BdbStorageEngine.get(BdbStorageEngine.java:73)
         at voldemort.server.protocol.admin.FetchEntriesStreamRequestHandler.handleRequest(FetchEntriesStreamRequestHandler.java:69)
         at voldemort.server.niosocket.AsyncRequestHandler.handleStreamRequestInternal(AsyncRequestHandler.java:305)
         at voldemort.server.niosocket.AsyncRequestHandler.handleStreamRequest(AsyncRequestHandler.java:240)
         at voldemort.server.niosocket.AsyncRequestHandler.write(AsyncRequestHandler.java:203)
         at voldemort.utils.SelectorManagerWorker.run(SelectorManagerWorker.java:100)
         at voldemort.utils.SelectorManager.run(SelectorManager.java:194)
         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
         at java.lang.Thread.run(Thread.java:662)
    The live traffic threads have the following stack
    "voldemort-niosocket-server50" daemon prio=10 tid=0x00007f951cbf6000 nid=0x572e waiting on condition [0x00007f93fc8c7000]
    java.lang.Thread.State: WAITING (parking)
         at sun.misc.Unsafe.park(Native Method)
         - parking to wait for <0x00000003021a79d8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
         at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
         at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:941)
         at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1261)
         at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:594)
         at com.sleepycat.je.latch.SharedLatch.acquireShared(SharedLatch.java:149)
         at com.sleepycat.je.tree.IN.latchShared(IN.java:484)
         at com.sleepycat.je.tree.Tree.getRootINInternal(Tree.java:2060)
         at com.sleepycat.je.tree.Tree.getRootIN(Tree.java:2036)
         at com.sleepycat.je.tree.Tree.search(Tree.java:1215)
         at com.sleepycat.je.dbi.CursorImpl.searchAndPosition(CursorImpl.java:2069)
         at com.sleepycat.je.Cursor.searchInternal(Cursor.java:2666)
         at com.sleepycat.je.Cursor.searchAllowPhantoms(Cursor.java:2576)
         at com.sleepycat.je.Cursor.searchNoDups(Cursor.java:2430)
         at com.sleepycat.je.Cursor.search(Cursor.java:2397)
         - locked <0x00000002997ab480> (a com.sleepycat.je.Cursor)
         at com.sleepycat.je.Database.get(Database.java:1042)
         at voldemort.store.bdb.BdbStorageEngine.get(BdbStorageEngine.java:233)
         at voldemort.store.bdb.BdbStorageEngine.get(BdbStorageEngine.java:73)
         at voldemort.store.rebalancing.RedirectingStore.get(RedirectingStore.java:136)
         at voldemort.store.rebalancing.RedirectingStore.get(RedirectingStore.java:60)
         at voldemort.store.invalidmetadata.InvalidMetadataCheckingStore.get(InvalidMetadataCheckingStore.java:105)
         at voldemort.store.invalidmetadata.InvalidMetadataCheckingStore.get(InvalidMetadataCheckingStore.java:41)
         at voldemort.store.DelegatingStore.get(DelegatingStore.java:60)
         at voldemort.store.stats.StatTrackingStore.get(StatTrackingStore.java:66)
         at voldemort.store.stats.StatTrackingStore.get(StatTrackingStore.java:39)
         at voldemort.server.protocol.vold.VoldemortNativeRequestHandler.handleGet(VoldemortNativeRequestHandler.java:311)
         at voldemort.server.protocol.vold.VoldemortNativeRequestHandler.handleRequest(VoldemortNativeRequestHandler.java:63)
         at voldemort.server.niosocket.AsyncRequestHandler.read(AsyncRequestHandler.java:130)
         at voldemort.utils.SelectorManagerWorker.run(SelectorManagerWorker.java:98)
         at voldemort.utils.SelectorManager.run(SelectorManager.java:194)
         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
         at java.lang.Thread.run(Thread.java:662)
    All the other JE background threads (Cleaner, INCompressor, Checkpointer) are TIMED_WAITING.
    Maybe I am misconfiguring or misusing the cursor? This is very reproducible with my dataset, so I will be glad to do more incremental debugging to figure out what's going on.
    Thanks
    Vinoth

    I understand. As one of the first users of DiskOrderedCursor you've uncovered an important restriction, and one we need to document better.
    Do you think the DO Producer should release the lock on the btree if blocked on the queue?
    Access to the Btree is blocked until the seeding phase is finished. This is very unlikely to change.
    There are two approaches I can recommend.
    1) Call DiskOrderedCursorConfig.setMaxSeedNodes(0). This will reduce initial performance because only the Btree root node will be used to seed the scan. However, if most of the scan does not use the in-cache Btree anyway, the performance impact will be small. This is the simplest approach.
    2) Call DiskOrderedCursorConfig.setMaxSeedNodes and DiskOrderedCursorConfig.setQueueSize to ensure that the queue will not be filled by the initial seeding process. For example, let's say you're using the default DatabaseConfig nodeMaxEntries, 128. This means there can be at most 128 keys queued for each Btree node that is used to seed the scan. Let's say the value you pass to setMaxSeedNodes is N. Then you can call setQueueSize((N + 1) * 128) to guarantee that the queue won't fill during seeding.
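    For concreteness, here is a minimal sketch of option 2 in code. The seed-node count (8) and the fan-out (128, the DatabaseConfig default) are assumptions; substitute your own values:

    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.DiskOrderedCursor;
    import com.sleepycat.je.DiskOrderedCursorConfig;
    import com.sleepycat.je.OperationStatus;

    public class DoScanSketch {
        static void scan(Database db) {
            final int maxSeedNodes = 8;      // N: seed from at most 8 in-cache Btree nodes
            final int nodeMaxEntries = 128;  // default DatabaseConfig fan-out
            DiskOrderedCursorConfig docc = new DiskOrderedCursorConfig();
            docc.setKeysOnly(true);
            docc.setInternalMemoryLimit(64 * 1024 * 1024);
            docc.setMaxSeedNodes(maxSeedNodes);
            // (N + 1) * nodeMaxEntries queue slots guarantee the producer cannot
            // block on a full queue while it still holds the Btree latch during
            // the seeding phase.
            docc.setQueueSize((maxSeedNodes + 1) * nodeMaxEntries);
            DiskOrderedCursor cursor = db.openCursor(docc);
            try {
                DatabaseEntry key = new DatabaseEntry();
                DatabaseEntry data = new DatabaseEntry();
                while (cursor.getNext(key, data, null) == OperationStatus.SUCCESS) {
                    // process each key here (e.g. the per-key get() in your test)
                }
            } finally {
                cursor.close();
            }
        }
    }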
    Does that make sense?
    --mark

  • Loading javafx images concurrently

    Hello,
    So I have been working on downloading images asynchronously and concurrently. I was planning to use the async JavaFX package. I made a big assumption before reading the details of how the async package works: I thought I would be able to easily run concurrent JavaFX code. But after reading http://blogs.oracle.com/clarkeman/entry/javafx_async_task I see that the async package was made for running Java code.
    Is there a way to asynchronously and concurrently load JavaFX images, e.g. load on a separate thread with the ability of the main thread to access those images?
    thanks a lot!
    jose

  • Maximum concurrent task load for UCCE/ICM multimedia/media routing

    Is there a place in ICM to configure the maximum concurrent load for multimedia (such as email and chat) for MR? I'm trying to find such a parameter, but I can only find the 'interruptable' settings for the media routing domain/media class. I thought that to use MR to route email/chat there should be somewhere to decide how many tasks one agent can receive, and that it should be configurable. Can anyone help, please?
    Thanks.

    Thanks, Geoff. I understand we can set the load in EIM/WIM, since I've done a few EIM/WIM projects and implemented such settings. However, in a case where another third-party application instead of EIM/WIM needs to be integrated with ICM over the MR interface, do you mean we need to implement that load setting on the third-party application side? I'm trying to understand how it works here. I think the ICM router has to know the agent's current load and the load allowed for the channel in order to do accurate routing. If that's not defined in ICM, then is it passed over the MR interface? I don't see how the third-party chat/email application can manage that setting all by itself without ICM's involvement.
    So if the third-party application is passing the MAX load setting to ICM, is there a default value ICM can use if that value fails to be passed from outside?
    Thanks.

  • Sql Loader and a batch id

    Hello
    I have a loading table which has a primary key. We insert into a load_ctl table, which has a load_ctl_id, and then SQL*Load a csv file into a table called load_table.
    Now when we have to process load_table, we work with the primary key from the control table, load_ctl_id, to know which load to process. How can I get the load_ctl_id into my load_table? The csv file does not contain it.

    What full version of Oracle?
    How do you currently generate the control table load_ctl_id?
    Do you have to be concerned with concurrent load jobs?
    What tool are you using to perform the load (sqlldr)?
    If you already have a way of generating the load_ctl_id and placing it into the control table, and you do not need to worry about concurrent jobs, you could use a before-insert trigger to insert the current maximum load_ctl_id (assuming a sequence or date stamp) into the load table with each row insert. Or leave the column null during the load and then immediately after the load update each row; a sketch of that second option follows below.
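    As an illustration, here is a minimal JDBC sketch of the second (stamp-after-load) option. The connection details are hypothetical, and it assumes only one load runs at a time; see the concurrency caveat below:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class StampLoadBatch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; adjust for your environment.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "loader", "secret");
            try {
                con.setAutoCommit(false);
                // Pick up the batch id of the control row created for this load
                // (assumes a single load at a time, per the caveat below).
                Statement st = con.createStatement();
                ResultSet rs = st.executeQuery("SELECT MAX(load_ctl_id) FROM load_ctl");
                rs.next();
                long loadCtlId = rs.getLong(1);
                rs.close();
                st.close();
                // Stamp every row the sqlldr run left with a NULL batch id.
                PreparedStatement ps = con.prepareStatement(
                        "UPDATE load_table SET load_ctl_id = ? WHERE load_ctl_id IS NULL");
                ps.setLong(1, loadCtlId);
                int rows = ps.executeUpdate();
                ps.close();
                con.commit();
                System.out.println("Stamped " + rows + " rows with batch " + loadCtlId);
            } finally {
                con.close();
            }
        }
    }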
    If you have to worry about concurrent load processes, where each would be a different batch number, then how you currently create the load_ctl_id value is more important to the solution, since you have to make sure two concurrently running sessions would in fact grab two different batch ids.
    HTH -- Mark D Powell --

  • Can CRM Load to Two BW Systems?

    Veterans,
    Can an existing CRM source system viably load concurrently into multiple BW (NW04s) target productive systems?
    If it can, what precautions are needed to ensure the viability of delta loads?
    I've reviewed landscape Note 775568 and its caveats and viability proposals, but it refers mainly to ERP source systems.
    Have also reviewed CRM FAQ Note 692195.
    I need to be more certain of the CRM source-system-side data and the associated delta ETL.
    Scenario:
    One CRM source system1
    CRM system1 already has 2 delta loads into the BW1 target for the same DatasourceX.
    The same CRM system1 also needs those same delta load values to ETL into the proposed second BW2 target productive landscape for the same DatasourceX, but maybe on a different day, so it receives its own delta load1.
    The BW1 target should also safely be able to receive delta load3 on its next delta.
    I am considering the more complex transactional CRM datasources.
    Appreciate your help and input.
    Regards,
    Lee

    Hi Andrea,
    Thanks and appreciated for your reply.
    I know that multiple BW systems can be loaded from a single R/3 source instance with some conditions - it is in landscape Note 775568.
    But my specific question relates to CRM source systems loading into multiple BW systems. CRM is a satellite system.
    CRM data flows into ERP and BW systems via middleware so that's the reason for my posting.
    Can someone else like Dennis Scoville who has answered previous CRM BW postings maybe also contribute?
    All input and contributions are welcome and appreciated.
    Regards,
    Lee

  • Need suggestions on loading 5000+ files using sql loader

    Hi Guys,
    I'm writing a shell script to load more than 5000 files using sql loader.
    My intention is to load the files in parallel. When I checked the maximum number of sessions in v$parameter, it is around 700.
    Before starting the data load, I programmatically get the number of current sessions and the maximum number of sessions, keep 200 sessions free (max. no. of sessions minus 200), and utilize the remaining ~300 sessions to load the files in parallel.
    I am also using a "wait" option to make the shell wait until the 300 concurrent SQL*Loader processes complete before moving further.
    Is there any way to make this more efficient? Also, is it possible to reduce the wait time without hard-coding the seconds? (For example: if any of those 300 sessions becomes free, assign the next file to the job queue, and so on.)
    Please share your thoughts on this.
    Thanks.

    Manohar wrote:
    I'm writing a shell script to load more than 5000 files using sql loader.
    My intention is to load the files in parallel. When I checked the maximum number of sessions in v$parameter, it is around 700.
    Concurrent load, you mean? Parallel processing implies taking a workload, breaking it up into smaller workloads, and doing those in parallel. This is what the Parallel Query feature does in Oracle.
    SQL*Loader does not do that for you. It uses a single session to load a single file. Making it run in parallel requires manually starting multiple loader sessions and performing concurrent loads.
    Have a look at Parallel Data Loading Models in the Oracle® Database Utilities guide. It goes into detail on how to perform concurrent loads. But you need to parallelise that workload yourself (as explained in the manual).
    Before starting the data load, I programmatically get the number of current sessions and the maximum number of sessions, keep 200 sessions free (max. no. of sessions minus 200), and utilize the remaining ~300 sessions to load the files in parallel.
    I am also using a "wait" option to make the shell wait until the 300 concurrent SQL*Loader processes complete before moving further.
    Is there any way to make this more efficient? Also, is it possible to reduce the wait time without hard-coding the seconds? (For example: if any of those 300 sessions becomes free, assign the next file to the job queue, and so on.)
    Consider doing it the way that Parallel Query does (as I've mentioned above). Take the workload (all files). Break the workload up into smaller sub-workloads (e.g. 50 files to be loaded by a process). Start 100 processes in parallel and provide each one with a sub-workload to do (100 processes each loading 50-odd files); a sketch of this follows below.
    This is a lot easier to manage than starting, for example, 5000 load processes and then trying some kind of delay method to ensure that they don't all hit the database at the same time.
    I'm loading about 100+ files (3+ million rows) every 60 seconds 24x7 using SQL*Loader. Oracle is quite scalable and SQL*Loader quite capable.
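    To make the pool-of-workers idea concrete, here is a minimal Java sketch (the paths, credentials and control file are hypothetical). A fixed pool caps concurrency, and each worker picks up the next file as soon as its current load finishes, which is exactly the "assign the next file when a session frees up" behaviour asked about:

    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ParallelSqlldr {
        public static void main(String[] args) throws Exception {
            final int workers = 100; // concurrent sqlldr sessions, well under the session limit
            List<Path> files = new ArrayList<Path>();
            try (DirectoryStream<Path> ds =
                    Files.newDirectoryStream(Paths.get("/data/incoming"), "*.csv")) {
                for (Path p : ds) files.add(p);
            }
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            for (final Path f : files) {
                pool.submit(new Runnable() {
                    public void run() {
                        try {
                            // One sqlldr session per file.
                            Process p = new ProcessBuilder(
                                    "sqlldr", "userid=loader/secret",
                                    "control=/data/load.ctl",
                                    "data=" + f, "log=" + f + ".log")
                                    .inheritIO().start();
                            int rc = p.waitFor();
                            if (rc != 0) System.err.println(f + " failed, rc=" + rc);
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.DAYS);
        }
    }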

  • How to load the big raster data in georaster?

    Hi anyone,
    I have 1024 GB of raster data. How can I load the data so that performance will not be affected by the large volume?

    you can choose one of many third-party ETL tools for GeoRaster to load the data. They all support TIFF, multi-band and big sizes. for example:
    you can use commercial products Safe FME Oracle Edition, PCI GeoRaster ETL Tools, ERDAS enterprise loader, Manifold Enterprise edition, etc. (for a complete list, visit http://www.oracle.com/technology/products/spatial/spatial_partners_isv.htm http://www.oracle.com/technology/products/spatial/htdocs/spatial_partners_downloads.html)
    there are also free products or open source supporting GeoRaster. PCI provides a FREE loader and viewer for GeoRaster (http://www.pcigeomatics.com/index.php?option=com_content&view=article&id=77&Itemid=4#GeoRaster). GDAL is fully integrated with GeoRaster. It can load and export tens of raster data formats for GeoRaster. (http://www.gdal.org/frmt_georaster.html http://www.geosofti.com/ http://www.oracle.com/technology/products/spatial/htdocs/spatial_opensource.html)
    Due to the large volume of the raster data set you have, please always consider concurrent loading, and avoid loading over a remote network (copy the data to the server and then load).
    hope this helps,
    jeffrey

  • Loading data from SAP ECC to DS---Error

    Hi all,
    I am using an ABAP dataflow to load data from a table in an SAP application to an Oracle database. I am getting an error during execution as follows.
    I am using the direct download method for the transfer of data. Need your valuable inputs.

    Hi phaneendranadh kandula,
    Direct Download Method: it transfers the data directly from the SAP application server to the client download directory.
    Note: It is not recommended, because the job cannot be scheduled, and it is not suitable for large amounts of data.
    Data transfer methods:
    - Data Services with SAP Direct Download: not recommended; cannot be scheduled; not recommended for large amounts of data.
    - Data Services with SAP Shared Directory: recommended; secure; can handle large amounts of data; can be executed in the background.
    - Data Services with SAP FTP: recommended; secure; applicable in multiple-OS environments.
    - Data Services with SAP Custom Transfer: recommended; highly secure.

  • 0CCA_C09 - DSO

    Hello,
    0CO_OM_CCA_9 is the delta enabled cost center datasource. Per SAP dataflow this loads data into the DSO 0CCA_C09. By default, the key fields in the standard SAP delivered CCA DSO are: controlling area, CO Document Number, Line Item of CO Document, Currency Type, Key Figure Type, Value Type for Reporting and Fiscal year variant.
    We won't be using CO Document Number and Line Item of CO Document, per our requirement. In this case, which other fields should be included as key fields?
    Any help will be greatly appreciated.

    1. The Infopackage into PSA will be scheduled with a Init Delta initially.
    You mean manually Init with data transfer will be done.
    2. The DTP for this will be scheduled as a FULL load.
    Okay.
    3. The Infopackage into PSA will be scheduled with a Delta.
    Okay.
    4. Should the DTP for this be FULL or DELTA or it does not matter?
    (Note: Prior PSA requests are not deleted)
    Yes it has to be DELTA and also Prior Requests have to be deleted from PSA.
    Do either of the below:
    Start off with a Delta DTP itself, as it does not matter (no Init required for a DTP),
    or
    Full DTP -
    Delete the requests loaded into the PSA by the Full DTP
    Delta DTP
    For the first time, the Delta DTP acts like an Init with data transfer, so it loads all full requests from the PSA (or, for that matter, Init Delta requests if done after the Full DTP).

  • Thread waits in huge JSP EL pages

    I am trying to understand the behaviour of the WebLogic JSP runtime when a large JSP page with lots of JSP EL expressions is requested concurrently by a number of users.
    Tomcat caches the evaluated EL expressions, and there is a bug in its ExpressionEvaluator which can cause a lot of threads to wait on a synchronized method. We are trying to do the same test on WebLogic, and we found the following in the thread dumps at highly concurrent loads (100+ concurrent requests).
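    The kind of per-expression caching we have in mind is roughly the following (a sketch only, not any container's actual code; ParsedExpression and parse() are hypothetical stand-ins for the evaluator's real types):

    import java.util.concurrent.ConcurrentHashMap;

    class ExpressionCache {
        private final ConcurrentHashMap<String, ParsedExpression> cache =
                new ConcurrentHashMap<String, ParsedExpression>();

        ParsedExpression get(String el) {
            ParsedExpression parsed = cache.get(el);
            if (parsed == null) {
                // Two threads may race to parse the same expression; putIfAbsent
                // keeps exactly one result, and no thread ever blocks on a
                // container-wide synchronized method.
                parsed = parse(el);
                ParsedExpression prior = cache.putIfAbsent(el, parsed);
                if (prior != null) parsed = prior;
            }
            return parsed;
        }

        private ParsedExpression parse(String el) {
            return new ParsedExpression(el); // stand-in for real EL parsing
        }
    }

    class ParsedExpression {
        final String source;
        ParsedExpression(String source) { this.source = source; }
    }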
    Our env:
    WL 9.2 on Fedora Core 5 x86/BEA JRockit(R) R26.0.0-189_CR269406-59389-1.5.0_04-20060322-1126-linux-ia32
    1.5Ghz Pentium M with 1.5GB of RAM
    "[ACTIVE] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'" id=44 idx=0x48 tid=18453 prio=5 alive, in native, native_blocked, daemon
    at jrockit/vm/Allocator.nativeGetNewTLA(II)I(Native Method)
    at jrockit/vm/Allocator.getNewTLAAndAlloc(IIIZ)Ljava/lang/Object;(Unknown Source)[inlined]
    at jrockit/vm/Allocator.getMoreMemoryAndAlloc(IIIIZ)Ljava/lang/Object;(Unknown Source)[optimized]
    at javelin/jsp/el/ExpressionEvaluatorImpl.parseEL(Ljava/lang/String;Ljava/lang/Class;Ljavax/servlet/jsp/el/FunctionMapper;)Ljavelin/jsp/el/ELNode$Expression;(ExpressionEvaluatorImpl.java:140)[optimized]
    at javelin/jsp/el/ExpressionEvaluatorImpl.parseExpression(Ljava/lang/String;Ljava/lang/Class;Ljavax/servlet/jsp/el/FunctionMapper;)Ljavax/servlet/jsp/el/Expression;(ExpressionEvaluatorImpl.java:132)[inlined]
    at javelin/jsp/el/ExpressionEvaluatorImpl.evaluate(Ljava/lang/String;Ljava/lang/Class;Ljavax/servlet/jsp/el/VariableResolver;Ljavax/servlet/jsp/el/FunctionMapper;)Ljava/lang/Object;(ExpressionEvaluatorImpl.java:123)[inlined]
    at javelin/jsp/el/ExpressionEvaluatorImpl.evaluate(Ljava/lang/String;Ljava/lang/Class;Ljavax/servlet/jsp/JspContext;Ljavax/servlet/jsp/el/FunctionMapper;)Ljava/lang/Object;(ExpressionEvaluatorImpl.java:95)[optimized]
    at jsp_servlet/__test._jspService(Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)V(__test.java:1553)[optimized]
    I looked at this posting:
    http://forums.bea.com/bea/message.jspa?messageID=600017709&tstart=0
    With 9.2, is the behaviour as mentioned in the CR261427 patch? Is WebLogic now caching the evaluated expressions, and is it now thread-safe?
    We noticed a dramatic slowdown in response times (from 1 sec at low loads to 6-8 seconds at 200+ users). Of course, with smaller JSP pages (with fewer EL expressions) the slowdown is not that dramatic, and WebLogic scales well.
    Any pointers appreciated,
    Ravi

    java.lang.reflect.Array is NOT a class which represents an array.
    It is a class that provides several static methods for using on arrays.
    The type of your attribute should be Object[] - an array of Objects.
    That will be compatible with an array of any sort of object (but not with an int[] for instance)
    <%@ attribute name="list" required="true" type="java.lang.Object[]" %>

  • Duplicate entries in referenced table (SQLInline=false)

    Hi,
    I'm testing with the following example XSD from Oracle:
    declare
    doc varchar2(8000) := '
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xdb="http://xmlns.oracle.com/xdb" elementFormDefault="unqualified" attributeFormDefault="unqualified" version="1.0" xdb:schemaURL="http://xmluser.de.oracle.com/xsd/deptemp.xsd">
         <xsd:complexType name="Department" xdb:SQLType="DEPT_T">
              <xsd:sequence>
                   <xsd:element name="Deptno" type="xsd:decimal" xdb:SQLName="DEPTNO"/>
                   <xsd:element name="Deptname" type="xsd:string" xdb:SQLName="DEPTNAME"/>
                   <xsd:element name="Employees" type="Employee" maxOccurs="unbounded" xdb:SQLName="EMPLOYEES" xdb:SQLInline="false" xdb:defaultTable="EMPLOYEES_TABLE"/>
              </xsd:sequence>
         </xsd:complexType>
         <xsd:complexType name="Employee" xdb:SQLType="EMP_T">
              <xsd:sequence>
                   <xsd:element name="Name" type="xsd:string" xdb:SQLName="NAME"/>
                   <xsd:element name="Age" type="xsd:decimal" xdb:SQLName="AGE"/>
                   <xsd:element name="Addr" type="Address" xdb:SQLName="ADDRESS"/>
              </xsd:sequence>
         </xsd:complexType>
         <xsd:complexType name="Address" xdb:SQLType="ADDR_T">
              <xsd:sequence>
                   <xsd:element name="Street" type="xsd:string" xdb:SQLName="STREET"/>
                   <xsd:element name="City" type="xsd:string" xdb:SQLName="CITY"/>
              </xsd:sequence>
         </xsd:complexType>
         <xsd:element name="emptable" type="Employee" xdb:defaultTable="EMP_TAB"/>
         <xsd:element name="depttable" type="Department" xdb:defaultTable="DEPT_TAB"/>
    </xsd:schema>';
    begin
    -- dbms_xmlschema.deleteSchema('http://xmluser.de.oracle.com/xsd/deptemp.xsd', dbms_xmlschema.DELETE_CASCADE_FORCE);
    dbms_xmlschema.registerSchema('http://xmluser.de.oracle.com/xsd/deptemp.xsd', doc, TRUE, TRUE, FALSE, TRUE);
    end;
    /
    Whenever I insert a row into the DEPT_TAB created by the above example, I get two entries for each contained Employees element in the EMPLOYEES_TABLE.
    E.g. the EMPLOYEES_TABLE is empty:
    SQL> select * from EMPLOYEES_TABLE;
    No rows selected.
    Then I insert the following:
    SQL> insert into DEPT_TAB VALUES (sys.XMLType('
    <depttable xmlns:xdb="http://xmlns.oracle.com/xdb" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmluser.de.oracle.com/xsd/deptemp.xsd">
         <Deptno>1</Deptno>
         <Deptname>Sales Consulting</Deptname>
         <Employees>
              <Name>ABC</Name>
              <Age>11</Age>
              <Addr>
                   <Street>Street1</Street>
                   <City>City1</City>
              </Addr>
         </Employees>
         <Employees>
              <Name>DEF</Name>
              <Age>22</Age>
              <Addr>
                   <Street>Street2</Street>
                   <City>City2</City>
              </Addr>
         </Employees>
    </depttable>'));
    After this I find the following in the EMPLOYEES_TABLE:
    SQL> select * from EMPLOYEES_TABLE;
    SYS_NC_ROWINFO$
    <Employees>
    <Name>ABC</Name>
    <Age>11</Age>
    <Addr>
    <Street>Street1</Str
    <Employees>
    <Name>DEF</Name>
    <Age>22</Age>
    <Addr>
    <Street>Street2</Str
    SYS_NC_ROWINFO$
    <Employees>
    <Name>ABC</Name>
    <Age>11</Age>
    <Addr>
    <Street>Street1</Str
    <Employees>
    <Name>DEF</Name>
    <Age>22</Age>
    <Addr>
    SYS_NC_ROWINFO$
    <Street>Street2</Str
    SQL> select count(*) from EMPLOYEES_TABLE;
    COUNT(*)
    4
    What's wrong here?
    Thanks for your help,
    Andreas

    Physical design differs depending on the DB, so let's start there. E.g., the Dim tables have unique indices built on the DIM IDs, but in Oracle the fact table (non-transactional and with no line item/high cardinality dims) would just have bitmap indices, so not having a unique index on the fact table is not an issue in Oracle.
    What DB and version?
    Do any of the Dims have more than 16 characteristics in them?
    Are there concurrent loads to the InfoCube from different data sources?
    Is Number Range buffering used for any of the dimensions?
    Are the key figure values identical, or are they different?

  • Re: Error when executing the Job.

    Hi,
    I am trying to migrate data from SAP to MySQL. When I try to execute the job, it gives me the following error:
    1. 3964     1660     DBS-070401     6/29/2010 4:37:23 PM     |Dataflow DF_Sales_Headers|Loader Query_sales_headers
    2. 3964     1660     DBS-070401     6/29/2010 4:37:23 PM     ODBC data source <MyODBC> error message for operation   <SQLExecute>: <[MySQL][ODBC 3.51
    3. 3964     1660     DBS-070401     6/29/2010 4:37:23 PM     Driver][mysqld-5.0.34-enterprise-nt]Duplicate entry '0010000000' for key 1>.
    4. 3964     1660     RUN-051005     6/29/2010 4:37:23 PM     |Dataflow DF_Sales_Headers|Loader Query_sales_header     Execution of <Regular Load Operations> for target <sales_headers> failed. Possible causes: (1) Error in the SQL syntax; (2) Database connection is broken; (3) Database related errors such as transaction log is full, etc.; (4) The user defined in the datastore has insufficient privileges to execute the SQL. If the error is for a preload or postload operation, or if it is for a regular load operation and load triggers are defined, please check the SQL. Otherwise, for (3) and (4), please contact your local DBA.
    [Note: But when I validate the dataflow before executing, it does not show any error messages.]
    This is a pretty simple batch job, with one source table and a target table. I have used a Query transform for mapping the columns.
    I am pretty new to BODI, so please bear with me if the question seems silly.
    Any kind of help is appreciated. Thank You!!

    Hi all,
    Sorry for any inconvenience. I found out that my job had already executed and populated the data. But somehow I missed that and was trying to re-execute it.
    But then again, I am trying to execute another job and have another peculiar problem. [This time, I have checked everything before posting.]
    Following is the Error Message:
    5412     6600     R3C-151001     6/29/2010 6:13:35 PM     |Dataflow DF_Sales_Item Error calling R/3 to get table data: <RFC Error: Key: RFC_ERROR_SYSTEM_FAILURE Status: Error in ASSIGN assignment in program SAPLSDTX >.
    I am not sure why, as I have checked all the connections and I don't see any problems.
    Any kind of help is appreciated.
    Thank You!!
    Edited by: Ragini_sri on Jun 30, 2010 5:35 PM

  • Google links to Ad Sites and unrequested audio plays in background

    1) When I click on links from Google searches, I am redirected to ad websites.
    2) When I have browser windows open, audio (that sounds like it is coming from other, unopened websites) will suddenly start playing.

    If it's an ad site on Google, it would probably be blocked by "Adblock Plus" with the right subscription filters, or a filter you make.
    See http://kb.mozillazine.org/Blocking_bad_sites_and_annoyances
    (the links for extensions are near the bottom of the page).
    That will probably stop your problem; there are other things you can try:
    The "Stop Autoplay" extension (I have it disabled; I think it would not show the first frame).
    media.autoplay.enabled, user set, boolean, False (I have that; it shows up in about:config but not in about:support).
    Nothing for stopping autoplay in these:
    http://kb.mozillazine.org/Flash
    http://kb.mozillazine.org/Blocking_bad_sites_and_annoyances
    See Load Tabs Progressively (https://addons.mozilla.org/firefox/addon/load-tabs-progressively/): load tabs one by one, and limit the number of concurrently loading tabs and/or unread loaded tabs (too restrictive for me).
    I would certainly take a look at the following; the second link requires the Stylish extension.
    * Stylish :: Add-ons for Firefox: https://addons.mozilla.org/en-US/firefox/addon/stylish/
    * Tab Color Underscoring active/read/unread (Fx3.6) - Themes and Skins for Browser - userstyles.org: http://userstyles.org/styles/24728/tab-color-underscoring-active-read-unread-fx3-6
    Please mark "Solved" the one answer that best helps others with a similar problem -- hope this was it.

  • How to run update query statement to update a table cell

    I have a job containing a dataflow which loads data to a target. After the dataflow is done loading, I would like to run the following. In the workflow context, do I need to add a script object? Also, is the query below the proper way to run an update statement?
    sql('Target_DS','update tbl_job_status set endtime=sysdate() where endtime is null');
    Thank you very much for the helpful info.
    Kind regards.

    Arun,
    Is this the right way, instead of enabling recovery: manually taking care of recoverable workflow logic by updating a table_job_status (starttime and endtime columns)?
    The only advantage of going this way rather than using 'recover as a unit' is that with 'recover as a unit', if any problem occurs in any dataflow, then all the dataflows get executed one more time.
    Instead, I want only the dataflow where the error occurred to run, for deltas.
    Kind regards.
    Edited by: cplusplus1 on Feb 13, 2012 4:21 PM
