Modify DB in chunk

Hi all,
I have a large internal table (about 1,000,000 records) and I want to modify a DB table from it.
What I want is to do the MODIFY in chunks, i.e. to commit after every 1,000 records.
What is the best way to do that?
Regards,
Chris

Hi,
I have some additional questions:
1. Is doing the MODIFY in chunks a good approach?
MODIFY is INSERT + UPDATE, which means that if a record in the internal table doesn't exist in the DB table it will be inserted: that may or may not be a risk: I don't know your goal.
I prefer to use UPDATE if I need to update existing records only; that way, if a wrong record is loaded into the internal table it won't be inserted, and I can avoid doing a SELECT to check whether a record exists or not.
But if you don't need to care about that and you need to both insert and update records, MODIFY is better.
2. I use MODIFY ... FROM TABLE in my program. Do I need to do a read before the update and modify only the changed records?
The concept is this: if you need to change existing records only, you have to make sure the records uploaded into the internal table are in the dictionary table too. How to do that depends on your program, i.e. on how you uploaded them; otherwise you should check it before using MODIFY (see the sketch below).
3. Isn't the DO statement risky (e.g. going into an infinite loop)?
Every time a DO loop is used it needs an exit condition: in my sample I've used a counter... otherwise an infinite loop is certain.
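A minimal sketch of the whole approach (filter to existing records if you need pure UPDATE behaviour, then write in packages of 1,000 rows with one COMMIT WORK per package, using a DO loop with an explicit exit condition). The table ztarget, the key fields key1/key2 and the internal table lt_data are placeholder names only, so adapt them to your own objects:

* ztarget, key1, key2 and lt_data are placeholders - replace with your own objects.
* Optional first step: keep only rows whose keys already exist in ztarget,
* so that MODIFY behaves like a pure UPDATE (question 2).
DATA: lt_existing LIKE lt_data,
      ls_data     LIKE LINE OF lt_data,
      lt_chunk    LIKE lt_data,
      lv_from     TYPE i VALUE 1,
      lv_to       TYPE i.

IF lt_data IS NOT INITIAL.
  SELECT key1 key2 FROM ztarget
    INTO CORRESPONDING FIELDS OF TABLE lt_existing
    FOR ALL ENTRIES IN lt_data
    WHERE key1 = lt_data-key1
      AND key2 = lt_data-key2.
  SORT lt_existing BY key1 key2.
  LOOP AT lt_data INTO ls_data.
    READ TABLE lt_existing WITH KEY key1 = ls_data-key1
                                    key2 = ls_data-key2
                           TRANSPORTING NO FIELDS BINARY SEARCH.
    IF sy-subrc <> 0.
      DELETE lt_data.                    "row is not in the DB table: drop it
    ENDIF.
  ENDLOOP.
ENDIF.

* Write the data in packages of 1,000 rows, one COMMIT WORK per package (questions 1 and 3).
DO.
  lv_to = lv_from + 1000 - 1.
  CLEAR lt_chunk.
  APPEND LINES OF lt_data FROM lv_from TO lv_to TO lt_chunk.
  IF lt_chunk IS INITIAL.                "exit condition - mandatory with DO
    EXIT.
  ENDIF.
  MODIFY ztarget FROM TABLE lt_chunk.
  IF sy-subrc = 0.
    COMMIT WORK.                         "makes the current package permanent
  ELSE.
    ROLLBACK WORK.                       "only the current package is rolled back
  ENDIF.
  lv_from = lv_to + 1.
ENDDO.

The package size of 1,000 is just the number from your question; it is a trade-off between the number of commits and how much work a single rollback would have to undo.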
Max

Similar Messages

  • How to modify an EXTERNAL_TABLE and MAPPING using OMBplus

    I need to implement a QA--> Production migration technique using OMB+ scripting. We are using source safe for keeping the MDL files.
    As we have insufficient resources both environments are in the same database as different schemas. Configuration is as below;
    Source Schema: DWPROD_TST (test) and DWPROD (production)
    Target Schema: DWEXTRACT_TST (test) and DWEXTRACT (production)
    Source Flat File Module: DWEXTRACT_TST_INPUT (test) and DWEXTRACT_INPUT (production)
    Target Flat File Module: DWEXTRACT_TST_OUTPUT (test) and DWEXTRACT_OUTPUT (production)
    All schemas have their own locations and modules defined. Once a mapping is created in the TST environment, developers check in the individual
    mdl files for the objects they used for development. From this point, the strategy I have followed is as below, as I haven't seen a similar scenario before.
    I rename the mdl extension to .zip and unzip the file. I take the extracted mdx file and do a text-based search/replace for all the test environment modules, locations, etc.,
    replacing them with the production environment names. Then I run the OMB+ script below in order to import and deploy the object. This technique works for table and flat file objects
    with slightly different OMB+ scripts. But I cannot make it work for external tables and mappings, and the reason I can guess is the configuration information of the Data File used
    as source for the external table and as target for the mapping I have developed. I can import the MDX successfully but can't deploy it. When I right-click the mapping or the external file in the OWB GUI
    and click on CONFIGURATION I can see that:
    For the MAPPING: Flat File Operator/(OperatorName)/Target Data File Location is still pointing to the TEST environment even though I have replaced all the TEST environment information with PRODUCTION.
    For the EXTERNAL_TABLE: Data File/(OperatorName)/Data File Location is still pointing to the TEST environment even though I have replaced all the TEST environment information with PRODUCTION.
    Is there any way to modify this LOCATION information for a MAPPING and an EXTERNAL_TABLE via OMB+? I read through the whole OMB+ Scripting Reference but either it is lacking or I am missing something.
    Or if you have an alternative solution for my problem I will be more than happy to read it.
    OMB+ Script for EXTERNAL_TABLE
    OMBCONNECT DWPROD/DWPROD@cakir:1521:orcl USE REPOSITORY 'OWBDB_SYS'
    OMBIMPORT MDL_FILE 'C:\\tfs\\stage\\externaltables\\XTRCT_XTRNL_INPUT_FILE\\XTRCT_XTRNL_INPUT_FILE.mdx' USE UPDATE_MODE MATCH_BY NAMES OUTPUT LOG TO 'C:\\tfs\\stage\\externaltables\\XTRCT_XTRNL_INPUT_FILE\\XTRCT_XTRNL_INPUT_FILE.log'
    OMBCC 'MY_PROJECT'
    OMBCONNECT CONTROL_CENTER
    OMBCOMMIT
    OMBALTER LOCATION 'XTRCT_DWEXTRACT_LOC' SET PROPERTIES (PASSWORD) VALUES ('PASSWORD')
    OMBALTER ORACLE_MODULE 'XTRCT_DWEXTRACT' ADD REFERENCE LOCATION 'XTRCT_DWEXTRACT_LOC' SET AS DEFAULT
    OMBALTER ORACLE_MODULE 'XTRCT_DWEXTRACT' SET PROPERTIES (DB_LOCATION) VALUES ('XTRCT_DWEXTRACT_LOC')
    OMBCOMMIT
    OMBREGISTER LOCATION 'XTRCT_DWEXTRACT_LOC'
    OMBCOMMIT
    OMBREGISTER LOCATION 'DWEXTRACT_INPUT'
    OMBCOMMIT
    OMBCC 'XTRCT_DWEXTRACT'
    OMBALTER EXTERNAL_TABLE 'XTRCT_XTRNL_INPUT_FILE' SET REFERENCE DEFAULT_LOCATION 'DWEXTRACT_INPUT'
    OMBCOMMIT
    OMBCREATE TRANSIENT DEPLOYMENT_ACTION_PLAN 'DEPLOY_PLAN' ADD ACTION 'TABLE_DEPLOY' SET PROPERTIES (OPERATION) VALUES ('REPLACE') SET REFERENCE EXTERNAL_TABLE 'XTRCT_XTRNL_INPUT_FILE'
    OMBDEPLOY DEPLOYMENT_ACTION_PLAN 'DEPLOY_PLAN'
    OMBDROP DEPLOYMENT_ACTION_PLAN 'DEPLOY_PLAN'
    OMBCOMMIT
    OMBDISCONNECT
    OMB+ Script for the mapping
    OMBCONNECT DWPROD/DWPROD@cakir:1521:orcl USE REPOSITORY 'OWBDB_SYS'
    OMBIMPORT MDL_FILE 'C:\\tfs\\stage\\mappings\\XTRCT_PROTOTYPE_XTRCTFILE_000\\XTRCT_PROTOTYPE_XTRCTFILE_000.mdx' USE UPDATE_MODE MATCH_BY NAMES OUTPUT LOG TO 'C:\\tfs\\stage\\mappings\\XTRCT_PROTOTYPE_XTRCTFILE_000\\XTRCT_PROTOTYPE_XTRCTFILE_000.log'
    OMBCC 'MY_PROJECT'
    OMBCONNECT CONTROL_CENTER
    OMBCOMMIT
    OMBALTER LOCATION 'XTRCT_DWEXTRACT_LOC' SET PROPERTIES (PASSWORD) VALUES ('PASSWORD')
    OMBALTER ORACLE_MODULE 'XTRCT_DWEXTRACT' ADD REFERENCE LOCATION 'XTRCT_DWEXTRACT_LOC' SET AS DEFAULT
    OMBALTER ORACLE_MODULE 'XTRCT_DWEXTRACT' SET PROPERTIES (DB_LOCATION) VALUES ('XTRCT_DWEXTRACT_LOC')
    OMBCOMMIT
    OMBALTER LOCATION 'XTRCT_DWPROD_LOC' SET PROPERTIES (PASSWORD) VALUES ('PASSWORD')
    OMBALTER ORACLE_MODULE 'XTRCT_DWPROD' ADD REFERENCE LOCATION 'XTRCT_DWPROD_LOC' SET AS DEFAULT
    OMBALTER ORACLE_MODULE 'XTRCT_DWPROD' SET PROPERTIES (DB_LOCATION) VALUES ('XTRCT_DWPROD_LOC')
    OMBCOMMIT
    OMBREGISTER LOCATION 'XTRCT_DWEXTRACT_LOC'
    OMBCOMMIT
    OMBREGISTER LOCATION 'XTRCT_DWPROD_LOC'
    OMBCOMMIT
    OMBREGISTER LOCATION 'DWEXTRACT_INPUT'
    OMBCOMMIT
    OMBREGISTER LOCATION 'DWEXTRACT_OUTPUT'
    OMBCOMMIT
    OMBCC 'XTRCT_DWEXTRACT'
    OMBCREATE TRANSIENT DEPLOYMENT_ACTION_PLAN 'DEPLOY_PLAN' ADD ACTION 'MAPPING_DEPLOY' SET PROPERTIES (OPERATION) VALUES ('CREATE') SET REFERENCE MAPPING 'XTRCT_PROTOTYPE_XTRCTFILE_000'
    OMBDEPLOY DEPLOYMENT_ACTION_PLAN 'DEPLOY_PLAN'
    OMBDROP DEPLOYMENT_ACTION_PLAN 'DEPLOY_PLAN'
    OMBCOMMIT
    OMBDISCONNECT
    Kind Regards,
    Emrah

    If it helps, this is a chunk from a script I have which turned off the parallel loading hint on all targets in a given mapping, which was required after we discovered that ANSI cross joins and parallel query do NOT work very well in Oracle 10.
    You can see that I OMBRETRIEVE all table operators and then use a foreach loop to cycle through them.
    log_msg LOG  "Altering: $mapName"
    set tablist [OMBRETRIEVE MAPPING '$mapName' GET TABLE OPERATORS ]
    foreach tgt_tble $tablist {
        if [catch { set retstr [ OMBALTER MAPPING '$mapName' MODIFY OPERATOR '$tgt_tble' SET PROPERTIES (LOADING_HINT) VALUES ('NOPARALLEL ("$mapName")')] } errmsg] {
            log_msg ERROR "Unable to set hint for table $tgt_tble of mapping $mapName "
            log_msg ERROR "$errmsg"
        } else {
            set print [ exec_omb OMBCREATE TRANSIENT DEPLOYMENT_ACTION_PLAN 'DEPLOY_PLAN' ADD ACTION 'MAPPING_DEPLOY' SET PROPERTIES (OPERATION) VALUES ('CREATE') SET REFERENCE MAPPING '$mapName' ]
            if [omb_error $print] {
                 log_msg ERROR "Unable to create Deployment plan for '$mapName'"
            } else {
                 set print [ exec_omb OMBDEPLOY DEPLOYMENT_ACTION_PLAN 'DEPLOY_PLAN' ]
                 if [omb_error $print] {
                     log_msg ERROR "Error on execute of Deployment plan for '$mapName'"
                 }
                 OMBDROP DEPLOYMENT_ACTION_PLAN 'DEPLOY_PLAN'
                 OMBCOMMIT
            }
        }
    }
    Ignore the exec_omb function as this is just a wrapper I built for OMB+ commands.
    As to passing parameters, look into argc and argv:
    if { $argc > 0 } {
         set i 1
         foreach arg $argv {
              puts "argument $i is $arg"
              incr i
         }
    } else {
       puts "no command line argument passed"
    }
    You can also write interactive scripts if you prefer. From earlier in the script above which turned off the parallel hint:
    puts ""
    puts -nonewline "Which mapping do you want to set to NOPARALLEL? "
    set mapName [gets stdin]
    I've even done loops to allow the user to perform the operation on multiple mappings, and to use pattern matching to determine which mappings to alter:
    set doLoop "1"
    while { [string match "1" $doLoop] } {
        puts -nonewline "What mapping to reconfigure (use name of generated plsql package)? "
        flush stdout
        set mapName [ string toupper [gets stdin] ]
        puts -nonewline "What value do you want to set as the maximum allowed errors? "
        flush stdout
        set eValue [gets stdin]
        set mapList [OMBLIST MAPPINGS '.*$mapName.*']
        if { [llength $mapList] == 0 } {
           log_msg ERROR "No mappings matching search string $mapName"
        } else {
            foreach mName $mapList {
                puts -nonewline "Update mapping $mName (y/n)? "
                flush stdout
                set doThisUpdate [string toupper [gets stdin]]
                if { [string match "Y" $doThisUpdate ] } {
                    if [catch { set retstr [ OMBALTER MAPPING '$mName' SET PROPERTIES (MAXIMUM_NUMBER_OF_ERRORS) VALUES ( '$eValue')] } errmsg] {
                        log_msg ERROR "Unable to modify max errors on Mapping $mName"
                        log_msg ERROR "$errmsg"
                    } else {
                        set print [ exec_omb OMBCREATE TRANSIENT DEPLOYMENT_ACTION_PLAN 'DEPLOY_PLAN' ADD ACTION 'MAPPING_DEPLOY' SET PROPERTIES (OPERATION) VALUES ('CREATE') SET REFERENCE MAPPING '$mName' ]
                        if [omb_error $print] {
                            log_msg ERROR "Unable to create Deployment plan for '$mName'"
                        } else {
                            set print [ exec_omb OMBDEPLOY DEPLOYMENT_ACTION_PLAN 'DEPLOY_PLAN' ]
                            if [omb_error $print] {
                                log_msg ERROR "Error on execute of Deployment plan for '$mName'"
                            }
                            exec_omb OMBDROP DEPLOYMENT_ACTION_PLAN 'DEPLOY_PLAN'
                            exec_omb OMBCOMMIT
                        }
                    }
                } else {
                    log_msg LOG "Skipping update to mapping $mName"
                }
            }
        }
        puts -nonewline "Do Another? (y/n) "
        flush stdout
        set doAnother [gets stdin]
        if { [string match "Y" [string toupper $doAnother] ] } {
            log_msg LOG "User requests more updates"
        } else {
            set doLoop "0"
        }
    }
    Anyway - there is lots that you can do to parameterize scripts or make them interactive. Just spend a bit of time working through a couple of TCL tutorials and you'll be well on your way.
    (Note: script chunks have been edited for clarity of the main point I wanted to illustrate and may contain mismatched braces or other syntax problems.)

  • Chunked Transfer Encoding - Doesn't work due to implementation bug?

    Hello All,
    I'm tackling the infamous "transfer-encoding: chunked" issue on trying to send POST data from the J2ME emulators and devices greater than 2048 bytes (or whatever the internal buffer size of a specific device might be). The nginx server on the other end always sends back a *411 - No content length specified* because it doesn't seem to recognize chunked transfer encoding. (despite the fact nginx is HTTP 1.1 compliant meaning it should be able to recognize it without rejection)
    One thing I've noticed is that compared to a correctly chunked HTTP body which looks something like this:
    2
    ab
    4
    abcd
    A
    0123456789
    3
    foo
    0
    the HTTP bodies coming out of the MIDlet though Network Monitor in the emulator just look like this:
    ababcd0123456789foo
    It seems the chunked HTTP body does not have the chunk size indices that should be there for proper formatting.
    I did my debugging on the Sun Java Network Monitor and maybe that tool "hides" the actual chunking indices
    or the chunking takes place at a point beyond the emulator's snapshot of the data. Does anyone know if the Sun Java Network Monitor indeed shows the entire body of an HTTP POST in its true state? The size of one packet I'm looking at is 2016, which is 32 bytes short of 2048 - the supposed limit where data starts getting chunked. Could these "missing" 32 bytes from the log be the chunking indices I am looking for? If the Network Monitor indeed shows the packet body in its entirety then it seems like people are having issues sending chunked transfers because the format is wrong.
    Anyway, assuming J2ME's chunked transfer encoding is working correctly, how come nginx and many other supposedly HTTP 1.1-compliant servers cannot understand the incoming format? Is Java's implementation of chunked transfer encoding somehow deviant from the standard? I read a post at the following address
    http://forum.java.sun.com/thread.jspa?forumID=76&threadID=454773
    in which a user could not get chunked posts recognized by Apache until he changed the format of his HTTP body (read the last post on that thread for details on this).
    In case no one can assist with the above, how would I go about intercepting the HTTP data stream right after it leaves the emulator or a real device? Is this something that can only be done server side? If so, how can I guarantee that the data I am receiving on the server side was not modified by another entity along the way?
    Any help on this would be much appreciated. Thanks.
    -Bob

    We're still tackling the same issue but we've made a few discoveries:
    * Nginx does not support chunked transfer encoding despite its HTTP 1.1 status.
    * The chunked transfer encoding format coming out of J2ME is correct. During some server side logging sessions we found that J2ME puts in the chunking indices and the finalizing 0 and carriage return to signify the end of the stream.
    * The missing 32 bytes not shown in the Sun Java Network Monitor are the chunking indices - they are just hidden from you at that level of display. Since the J2ME emulator sends its packets in 2048-byte chunks, when viewing the monitor you will just see the content itself (2016-byte chunks). The other 32 bytes are the headers, i.e.
    7e0[carriage return]
    which is a 32 bytes sequence (4 8 byte characters) with the last character being the carriage return. 7e0 in decimal is 2016 - the size of the displayed packet in the network monitor.
    Anyone have any suggestions for a reliable server end receiver for chunked transfer encoding content? (preferably one which works well with Ruby on Rails)

  • JVM Crash OutOfMemory Chunk::new Out Of swap Space

    Hi !
    We are currently experiencing a problem on both JRE-1.5.0_15 and JRE-1.6.0_7. We have a JEE application running with EJB2 on JBoss 4.2.3 that we have migrated to EJB3 on JBoss 5. The server is a Windows 2003 SP2 server running in 32-bit mode (with the /3GB option in boot.ini), set up with 4GB memory and 4GB paging.
    Before the migration, this app ran perfectly on JBoss 4.2.3, compiled on JDK5 with runtime JRE5 or JRE6 (not at the same time, though), with up to 50 simultaneous users on a single JBoss instance.
    After the migration to EJB3, we are experiencing this error with as few as 5 simultaneous users:
    # java.lang.OutOfMemoryError: requested 2292728 bytes for Chunk::new. Out of swap space?
    Current thread (0x54092800):  JavaThread "CompilerThread0" daemon [_thread_in_native, id=4092, stack(0x54280000,0x542d0000)]
    Stack: [0x54280000,0x542d0000]
    [error occurred during error reporting (printing stack bounds), id 0x80000001]
    Current CompileTask:
    C2:4692      com.mycompany.myproduct ...
    These are our JVM options for memory:
    jvm_args: -Dprogram.name=pdm.bat -Xms1024m -Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+DisableExplicitGC
    We don't have this problem if we force the JVM options so that it only runs in interpreted mode (-Xint). We have also created a file with the list of methods we want to keep out of the JIT. With this option, the server runs longer but it eventually crashes.
    So we reckon this is a JIT issue on the App Server. I have posted a thread on JBoss forum but they recommended me to open one here.
    So I have modified the JVM options to increase the Perm size :
    jvm_args: -Dprogram.name=pdm.bat -Xms1024m -Xmx1024m -XX:PermSize=512m -XX:MaxPermSize=512m -XX:+DisableExplicitGC
    In that case, JBoss crashes at startup time:
    # An unexpected error has been detected by Java Runtime Environment:
    # java.lang.OutOfMemoryError: requested 35608 bytes for Chunk::new. Out of swap space?
    #  Internal Error (allocation.cpp:218), pid=5756, tid=8176
    #  Error: Chunk::new
    # Java VM: Java HotSpot(TM) Server VM (10.0-b23 mixed mode windows-x86)
    # If you would like to submit a bug report, please visit:
    #   http://java.sun.com/webapps/bugreport/crash.jsp
    # The crash happened outside the Java Virtual Machine in native code.
    # See problematic frame for where to report the bug.
    ---------------  T H R E A D  ---------------
    Current thread (0x64142800):  JavaThread "CompilerThread0" daemon [_thread_in_native, id=8176, stack(0x64330000,0x64380000)]
    Stack: [0x64330000,0x64380000]
    [error occurred during error reporting (printing stack bounds), id 0x80000001]
    Current CompileTask:
    C2:1420  !   org.jboss.ejb3.Ejb3Deployment.deployElement(Ljava/io/InputStream;Lorg/jboss/ejb3/Ejb3HandlerFactory;Ljavax/naming/InitialContext;)V (66 bytes)
    Heap
    PSYoungGen      total 217472K, used 176224K [0x539d0000, 0x639d0000, 0x639d0000)
      eden space 170432K, 97% used [0x539d0000,0x5dbec148,0x5e040000)
      from space 47040K, 21% used [0x60be0000,0x615dc210,0x639d0000)
      to   space 44672K, 0% used [0x5e040000,0x5e040000,0x60be0000)
    PSOldGen        total 786432K, used 200494K [0x239d0000, 0x539d0000, 0x539d0000)
      object space 786432K, 25% used [0x239d0000,0x2fd9b9b0,0x539d0000)
    PSPermGen       total 524288K, used 39978K [0x039d0000, 0x239d0000, 0x239d0000)
      object space 524288K, 7% used [0x039d0000,0x060da870,0x239d0000)
    So I have reduced heap size:
    jvm_args: -Dprogram.name=pdm.bat -Xms768m -Xmx768m -XX:PermSize=512m -XX:MaxPermSize=512m -XX:+DisableExplicitGC
    and then it works much longer (can reach 20 users ...) before, again, eventually crashing.
    I have reported the error at http://java.sun.com/webapps/bugreport/crash.jsp but I haven't heard back yet...
    Any recommendations to fix this issue? Many thanks in advance for your help!

    I recently experienced a similar issue on HP-UX 11iv2 with WebLogic 9.1 and HP JDK 1.5, and we were also using the CMS (Concurrent Mark Sweep) GC. I was able to resolve the issue by forcing the size of the Survivor Area using the -XX:SurvivorRatio=n parameter. By setting this to a value lower than the default (I think 25) it forces the Survivor Area to be a larger percentage of the New Generation, and hence larger in size. I found that I needed at least 8MB, but we have a pretty large application, so you might get by with significantly less.
    As it stands, your New Gen will be 25% (default, I think) of your Xms, and I think the default SurvivorRatio is 25. The tricky part is the calculation using the SurvivorRatio. Since there are two Survivor Areas, if -XX:SurvivorRatio=25, then each Survivor Area will be 1/27 of New Gen. Anyhow, it's confusing, but you can easily verify these numbers online and do the math yourself, but hopefully you get the point.
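    A rough worked example of that calculation, using the figures assumed above (1024m heap, New Gen at 25%, -XX:SurvivorRatio=25); the real defaults depend on your JVM and platform, so treat the numbers as illustrative only:
    New Gen       = 25% of 1024 MB = 256 MB
    Each Survivor = New Gen / (SurvivorRatio + 2) = 256 MB / 27 = ~9.5 MB
    which is just above the ~8MB minimum mentioned above.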
    Obviously, I can't guarantee anything, but you might want to give it a try since my symptoms were extremely similar.

  • Modify contents in a table

    Hi all,
    I have a table 'tab' with some records, and this table 'tab' has two keys. I want to read a particular record with the two keys (I have both key values). This record has a unit of measure field which is always 'L', and I want to overwrite the unit of measure value with 'Kg' in that field and modify the table with that row entry.
    How can I do this? Please help.

    Hi Raju,
    Let this be internal table it_test
    key1 | key2 | name | unit |amount
    1 | 2 | asd  | P | 100.00
    3 | 4 | ash  | L | 200.00
    5 | 6 | adsf | P |300.00
    Let's assume that ( unit = 'L' ) can occur multiple times
    for different records in the internal table.
    Then add this chunk of code to change the contents of the field unit from 'L' to 'Kg'.
    LOOP AT it_test WHERE unit = 'L'.
      it_test-unit = 'Kg'.
      MODIFY it_test INDEX sy-tabix TRANSPORTING unit.
    ENDLOOP.
    Regards,
    AS

  • Modify items in datagrid

    I have a datagrid that is bound to an array collection. Is
    there a simpler way to modify the properties in the datagrid and
    have the datagrid change to reflect the changes? This is my
    current code to alter one property and get the datagrid display to
    update itself:
    var item:Object = DGCollection.getItemAt (0);
    item.label= "changed text";
    DGCollection.itemUpdated(item);
    This code will change the name of a row and update the
    datagrid to display the changed text. I always thought that you
    could use a one-liner to do this same thing. Any ideas? Or is this
    how you do it?

    This is what I have:
    [Bindable]
    private var DGCollection:ArrayCollection = new
    ArrayCollection ();
    in a callback I receive data from the server and then I use
    this line of code to populate the datagrid:
    DGCollection.addItem (myObject);
    and then here is my datagrid xml:
    <mx:DataGrid dataProvider="{DGCollection}"
    id="MyDataGrid">
    <mx:columns>
    <mx:DataGridColumn itemRenderer="MyRenderer1"
    dataField="label" headerText="headerText"/>
    <mx:DataGridColumn itemRenderer="MyRenderer2"
    dataField="label" headerText="headerText"/>
    </mx:columns>
    </mx:DataGrid>
    The data loads just fine, but I thought that since the datagrid was
    bound to the collection, the UI would automatically update when
    you change anything on the collection.
    I've also tried this chunk of code to try to update the
    collection:
    DGCollection[myIndex].label = "test";
    which doesn't work.
    Is there a piece of code that I can use that updates the
    collection automatically?

  • Can varargs in SessionBeans be deployed in WL9.2? modifier transient

    Hello,
    We're trying to deploy an ear onto an AdminServer in development mode. We want to declare some local stateless session beans with vararg input parameters.
    e.g.
    public String[] getValidIDsOfType(String validOnDate, String... filterTypes);
    It compiles fine and we can create an ear file.
    Try to activate a deployment, however, and the error we get is "An error occurred during activation of changes, please see the log for details... Compiler failed executable.exec...":
    modifier transient not allowed here public transient java.lang.String[] getValidIDsOfType(java.lang.String arg0, java.lang.String[] arg1)
    That doesn't make much sense until you go into the domain's
    servers\AdminServer\cache\EJBCompilerCache\randomName\<packagePathToEJB>
    directory and look at the <EJBName>Local_<blah>Intf.java and <EJBName>Local<blah>_ELOImpl.java
    This shows that WL9.2's AppC translates the original EJB to:
    public transient java.lang.String[] getValidIDsOfType(java.lang.String arg0, java.lang.String[] arg1)
    in the Local_<blah>_Intf.java interface, which extends WLEnterpriseBean. Similarly for the ELOImpl class.
    I suppose the workaround for now is to remove the varargs parameter and for each client call to pass in an array, even where there's only one object.
    Has anyone else come across this, or have any BEA guys there got a fix for it? Is there any way to remove the transient modifier that WL9.2 inserts?
    Thanks and Best Regards,
    ConorD

    For anyone with the same problem, there's no full fix at present...I used a workaround instead and changed the input parameter from varargs to an ArrayList, which worked fine.
    Then I contacted BEA support listing it as a bug and gave them a chunk of code and an Ant build file to replicate it. BEA support were able to replicate it, and very quickly assigned a team to it, but informed me that it can't be fixed at present, and that I should use my workaround.
    The problem is actually with the spec; the bit position used to signify that a member is transient is shared with the one that signifies that a method takes varargs. It's implemented with the same effect on both Sun's JDK and JRockit; I don't know about IBM's JDK. Unfortunately BEA have no plans to change this and it occurs in WLS 10 too. I suppose the only way to fix it would be to make a "best guess" on whether the user wanted transient or varargs. For example, if it's a place where varargs are allowed but transient is not, then assume the user wanted varargs and mark them so (like with an EJB), otherwise assume the user wanted transient. But I was told by BEA that they have no plan to add those smarts into their AppC, and maybe that's not even possible, given that the spec has overloaded those bit positions.
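    If I read the class file format correctly (please verify against the spec linked below), the overlap is literally the same flag value being reused for two different meanings:
    ACC_TRANSIENT (field access flag)  = 0x0080
    ACC_VARARGS   (method access flag) = 0x0080
    so a tool that prints method flags using the field-flag names will show "transient" where varargs was meant.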
    Here's a link to the bug filed with Sun that BEA sent on to me:
    http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6516895
    This includes a link to the relevant part of the spec:
    http://java.sun.com/docs/books/vmspec/2nd-edition/ClassFileFormat-Java5.pdf
    I hope that this helps anyone with the same problem.
    Best Regards,
    ConorD

  • Chunk expression .color returning error

    Greetings, this one should be so simple it is driving me
    crazy. I'm trying to test and set the color of a word in a text
    field containing ten comma-separated words, as follows in the Director 8.5
    message window:
    put member("test").text
    -- "one, two, three, four, five, six, seven, eight, nine,
    ten"
    I can select a single word and set a color for it via the
    color picker.
    However, I can't seem to figure out how to return or set the
    color of a word-chunk via Lingo:
    put member("test").word[3].color
    produces a Script Error: Wrong number of parameters.
    I've also tried this using other chunks like item, line and
    char.
    In the docs under "color" this example is given so it appears
    that it should work:
    member("myQuotes").char[4..7].color = rgb(200, 150, 75)
    When I modify the field name and try this exact statement, I
    still get the same error when setting or returning the .color
    property.
    I figure it has to be something simple. I tried searching this
    forum back several years but had no luck. Can anyone help? Thanks

    If you have MX or lower I believe there are two problems with
    the field member (as opposed to a text member which should work
    with the posted syntax).
    Problem 1 is that the fields don't have a color property on
    an individual chunk.
    You can however use the older foreColor.
    Problem 2 is that the fields require verbose syntax when
    referring to chunks.
    set the foreColor of word 2 of member "test" to 128

  • Modify logging parameter of a lob

    I have a table as defined below (table a).
    I want to change the logging of the lob (DATA) from nologging to logging.
    I use the alter command:
    alter table a modify lob (DATA) store as (nocache logging);
    This doesn't work. I've tried numerous variations of this command but I can't get it working.
    How do I do it?
    CREATE TABLE a
    b NUMBER(10) NOT NULL,
    c BLOB NOT NULL
    TABLESPACE ss
    PCTUSED 40
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 64K
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT
    LOGGING
    NOCOMPRESS
    LOB (DATA) STORE AS
    ( TABLESPACE BLOBSPACE
    ENABLE STORAGE IN ROW
    CHUNK 8192
    PCTVERSION 0
    NOCACHE
    NOLOGGING
    STORAGE (
    INITIAL 800M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    BUFFER_POOL DEFAULT
    CACHE
    NOPARALLEL
    MONITORING;
    /
    Robert

    SQL>  create table a (b number, c blob)
      lob (c) store as (enable storage in row chunk 8192 nocache nologging)
    Table created.
    SQL>  alter table a modify lob (c)  (nocache logging)
    Table altered.
    SQL>  drop table a
    Table dropped.

  • How to modify parameters for a LOB?

    SQL1:
    ALTER TABLE "TEST"."T03"
    MODIFY LOB (COL2)
    STORE AS SECUREFILE
    (TABLESPACE "TBS_1" DISABLE STORAGE IN ROW CHUNK 5000 PCTVERSION 22 CACHE READS FILESYSTEM_LIKE_LOGGING STORAGE
    ( INITIAL 1024 NEXT 1111 PCTINCREASE 12))
    SQL2:
    ALTER TABLE "TEST"."T03"
    move LOB (COL2)
    STORE AS SECUREFILE
    (TABLESPACE "TBS_1" DISABLE STORAGE IN ROW CHUNK 5000 PCTVERSION 22 CACHE READS FILESYSTEM_LIKE_LOGGING STORAGE
    ( INITIAL 1024 NEXT 1111 PCTINCREASE 12))
    My intention is to modify the parameters of LOB (COL2). It looks like SQL1 fails with the SQL error ORA-00906: missing left parenthesis, while SQL2 can be executed correctly. But the result of SQL2 turns out to be like this:
    LOB ("COL2") STORE AS SECUREFILE (
      TABLESPACE "TBS_1" DISABLE STORAGE IN ROW CHUNK 8192
      CACHE READS NOLOGGING  NOCOMPRESS  KEEP_DUPLICATES
      STORAGE(INITIAL 106496 NEXT 8192 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) "
    which is not what I expect. Can anybody please help?

    check query
    >
    SELECT GR.NAME GROUPING_RULE_NAME,
    GR.DESCRIPTION GROUPING_RULE_DESC,
    GR.START_DATE,
    GR.END_DATE,
    ORD.NAME ORDERING_RULE_NAME,
    AL.MEANING TYPE,
    GC.DESCRIPTION COLUMN_NAME
    FROM RA_GROUPING_RULES GR,
    RA_LINE_ORDERING_RULES ORD,
    RA_GROUPING_TRX_TYPES GT,
    RA_GROUP_BYS GB,
    RA_GROUP_BY_COLUMNS GC,
    AR_LOOKUPS AL
    WHERE GR.ORDERING_RULE_ID = ORD.ORDERING_RULE_ID(+)
    AND GR.GROUPING_RULE_ID = GT.GROUPING_RULE_ID(+)
    AND GT.GROUPING_TRX_TYPE_ID = GB.GROUPING_TRX_TYPE_ID(+)
    AND GB.COLUMN_ID = GC.COLUMN_ID
    AND AL.LOOKUP_TYPE = 'GROUPING_TRX_TYPE'
    AND AL.LOOKUP_CODE = GT.CLASS
    AND GR.NAME =:GROUPING_RULE_NAME
    >
    in some IDE like SQL Developer.
    Before execution, try to initialise the session and simulate the user login under which you run the concurrent request,
    because you have the table AR_LOOKUPS in the SQL, which depends on the language:
    CREATE OR REPLACE VIEW AR_LOOKUPS
    WHERE LV.LANGUAGE = userenv('LANG')
       and LV.VIEW_APPLICATION_ID = 222
       and LV.SECURITY_GROUP_ID = 0;
    and please post the result of the test.

  • Images modified on *import*?!

    I upgraded to iPhoto '09 and then imported a group of GPS-tagged RAW photos to check out the Places feature (which is pretty cool). On the next Time Machine backup, I noticed a significant chunk of data was being backed up (12Gb). I was amazed to notice that my iPhoto library had swollen by nearly 5Gb (I have the "copy to iPhoto library" option turned off). Upon checking, I found that a "modified" version had been created for every photo I imported. In the past, I believe modified versions have only been created when I made... a modification. Since I only use iPhoto for cataloging, no modifications were made on this group (by me). What did iPhoto do that it needed to modify all 1300 images I imported? This could be a serious problem since I was planning to import about 60Gb worth of files for the Places feature. If it's going to create a modified version of each one, it's back to being "least useful"
    PS This is not a case of auto-rotate. 90% of the files were landscape but still had modified versions created.

    OK, that would certainly account for the additions. Are the previews typically high quality? I guess I'm used to "preview" meaning "lower resolution, lower quality image to fill the gap while RAW file is rendered." Seems that this is not the case with iPhoto? Is there a way to force a lower quality preview?
    And I believe I mentioned 1300 images above.

  • Possible to intercept and modify rtmp streams?

    If user A is streaming RTMP out to port X, is it possible to listen in on the originating port, get the data, modify it and send it out to port X? Since the RTMP data is sent in chunks, isn't it possible to modify the data and send it on to its destination? This is not RTMPS, so there's no security layer, so it should be possible, no? And if it is, what would be the best approach?

    As Paul said, "no" it is not possible.  However, there may be some outputs that you could use.  I didn't see any attachment to your post.  Look at the connector pane if there are any outputs that provide the data you are looking for and that you could use inside your own VI.

  • Message: "The database structure has been modified" every time I log to SAP

    Hello,
    "The database structure has been modified. In order to resume this process, all open windows will be closed". Every time I log to one of my companies in SAP Business One this message appears.
    I haven't installed any new add-ons and have made no changes to the database structure (and no other user has made any changes), but this message always appears when I log in to the company for the first time (when I log on as another user or log in to another company there is no message). Can anyone help me with this problem?
    Best regards
    Ela Świderska

    Hi Ela Świderska,
    You may check this thread first:
    UDFs disappeared
    Thanks,
    Gordon

  • Blends: Difference between open and closed paths modifying spine

    Okay, just had a discovery today about the blend tool.
    The difference between closed paths and open paths is critical when you want to modify a blend spine, such as make it curve. Between open paths, there is no spine shown when a blend is created. Ever notice this? Between closed paths, there is a spine, which can be modified, such as making it curved. Now, the trick is how to modify a spine with open paths when there is no spine shown. Found out today that starting with a closed path, segments can be deleted to make the path open, then the spine can be modified and the open path state maintained!
    This is a huge bug in Illustrator and makes no sense to me. I'd like Adobe to fix this in the next major release. Hope this helps anyone struggling with this issue.

    One thing you should be aware of concerning open ended splines:
    It makes an enormous difference as to how the blend behaves whether the endpoints have handles or not.
    Give them handles and you will find that you can alter the distribution of the blend objects just by adjusting the handles or by dragging on the spline itself.
    Give it a try.

  • How to modify a SQL query?

    Hi all,
    I am using Crystal Reports version 10. I have a number of reports that have been written by a software vendor whereby the name of the database they were connected to when the report was written is coded into the FROM clause of the report's SQL query, e.g. "GCUK_2" in the SQL snippet below.
    SELECT "Clients"."NAME", "Quotes"."QUOTE_ID", "Quote_Items"."UPRICE", "Quote_Items"."QTY", "Quote_Items"."UOM", "Quote_Items"."QUSAGE_ID", "Report_Control"."QUSAGEID", "Quote_Items"."STANDARD", "Quote_Items"."SECT_FLAG", "Quote_Items"."DISPORDER", "Quotes"."DESCRIPT", "Report_Control"."SECT_NAME", "Quote_Items"."CNT", "Category_and_Type"."TYPEDESC", "Quote_Items"."DESCRIPT", "Report_Control"."DISP_SORT"
    FROM   ((("GCUK_2"."schedwin"."QTE_CTRL" "Report_Control" INNER JOIN "GCUK_2"."schedwin"."QUSAGE" "Quote_Items" ON "Report_Control"."QUSAGEID"="Quote_Items"."QUSAGE_ID") LEFT OUTER JOIN "GCUK_2"."schedwin"."QUOTES" "Quotes" ON
    I have tried setting the Datasource Location, but it doesn't change the query at all. I have read on another forum that you can generate another SQL query using the Database Expert, Current Connections, then right-click Add Command for the database you want to create a SQL command against. Is this the only way to update the database names in the query?
    Thanks,
    Scott.

    Hi Sourashree,
    Thanks for that. All the reports were created by the vendor using tables as opposed to the command object. I would have thought that changing the datasource would automatically cause Crystal to rewrite the SQL query syntax, but this doesn't appear to be the case.
    Yes, I've noticed that modifying the record selection will change the SQL query. The only way I can see to change the database name in the query is to change to the desired data source and then remove and re-insert the tables, which will then update the query with the correct name. However, this seems to be a convoluted way of changing the db name in the query.
