Too many EJB serializations in the pstore directory

Hello guys, my WebLogic (8.1) server is creating too many files in the /pstore directory and consuming too much of my file system; it has actually reached 25 GB in three days. I did not find anything that could give a clue about what is going on. I wonder if you guys could help me. I would appreciate it very much.
Thanks for your help in advance.
Alex/

Guys... it has stopped without any intervention.
Edited by: user10455324 on Feb 22, 2010 5:30 PM

Similar Messages

  • Too many files/folders in a directory: poor performance

    We have an Xserve running 10.6.2 with a large Fibre Channel RAID from ACNC. One of the shares is used for archive and has a lot of files/folders in it, and as such it takes a while to display and search. Is there any way to speed this up, other than the obvious of not keeping so many in one directory? I am getting resistance to breaking it up into subfolders, though that is obviously the best solution. At what level does performance degrade? Any documentation or info on this? Thanks!

    While all file systems support large numbers of files in a directory, directory performance degrades as the number of files increases. The actual threshold varies from file system to file system (UFS, HFS+, NTFS, EXT2/3/4... the list goes on), but it's always there.
    In pure technical terms, performance starts to degrade at a single-digit number of files (typically 4-10), since past that point the directory index exceeds a single block on disk; the difference between reading one block and two isn't noticeable to the average human, though. In practical terms, on most direct-attached storage you'll see a performance drop at around 10,000 files, and by 20,000 it's noticeable.
    These numbers can be far worse over a network connection, since the directory has to be read by the server before it can be passed over the network and re-parsed by the client.
    There isn't much of an option other than to split the directories up; a sketch of one way to do that programmatically follows.
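    For illustration, here's a minimal Java sketch of bucketing a flat directory into hashed subdirectories (the 256-bucket layout and the class name are assumptions for the example, not anything from the original post):

    import java.io.File;
    import java.io.IOException;

    public class DirectoryFanOut {
        // Move every file in a flat archive directory into one of 256
        // hashed subdirectories, keeping each directory's entry count low.
        public static void fanOut(File flatDir) throws IOException {
            File[] files = flatDir.listFiles();
            if (files == null) return;
            for (File f : files) {
                if (!f.isFile()) continue;
                // Derive a stable two-hex-digit bucket from the file name.
                String bucket = String.format("%02x", f.getName().hashCode() & 0xff);
                File targetDir = new File(flatDir, bucket);
                if (!targetDir.isDirectory() && !targetDir.mkdirs()) {
                    throw new IOException("Cannot create " + targetDir);
                }
                if (!f.renameTo(new File(targetDir, f.getName()))) {
                    throw new IOException("Cannot move " + f);
                }
            }
        }
    }

    Lookups then hash the file name the same way to find the right bucket, so no single directory ever grows past a fraction of the original count.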

  • Too many EJBs??

    I am trying to deploy an application with 22 EJBs (yes, I know that's a lot... mostly for development). When I try this, I continually get the error "An error occurred while redeploying the application. Nested exception Root Cause: Syntax error in source. Syntax error in source".
    When I install the application with only 21 EJBs, all seems ok. Even if I just add an "empty" EJB, I get this error. Is there a maximum allowable number of EJBs for an application? Is there a way to change this?
    As always, thanks in advance!!

    Sorry, as always, I forgot this:
    Oracle9iAS Release 2 running on Red Hat 8.0

  • Cannot create a calendar collection because there are too many already present in

    My OS X Server (Yosemite) error log is throwing this error:
    2014-10-22 00:03:07+0800 [-] [caldav-2]  [-] [twistedcaldav.storebridge.CalendarCollectionResource#error] Cannot create a calendar collection because there are too many already present in <twistedcaldav.directory.calendar.DirectoryCalendarHomeResource object at 0x10743a750>
    ...when I attempt to add more than 50 lists in the Reminders app. I had a similar issue in Mavericks. Is this limit of 50 available to modify in a config file somewhere?
    My plan is to manage GTD-type projects in my Reminders apps (on iOS and OS X), but this limit is keeping me from creating a list for EVERY project I have.

    Woohoo! I set the integer to "500" and I now have over 200 lists added, and syncing, to my Reminders app and related tools.
    Here's exactly what I did after reading the response from Linc Davis:
    1. Stop the Calendar service.
    2. Create /Library/Server/Calendar and Contacts/Config/caldav-user.plist.
    3. Edit the contents of caldav-user.plist, setting your desired integer value:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
      <dict>
        <key>MaxCollectionsPerHome</key>
        <integer>500</integer>
      </dict>
    </plist>
    4. Start the Calendar service.
    Enjoy your many more lists.
    Note that a file created somewhere other than ~/ may have permissions issues. To get around this, I made a copy of /Library/Server/Calendar and Contacts/caldavd-system.plist, then renamed and edited that file with nano in Terminal.

  • BEA-012036 Too Many Files | WebLogic startup for EJB deployment

    I am trying to deploy an EAR with an EJB module inside. I am seeing the below error intermittently while deploying, and now when starting the WebLogic server.
    There is already an EAR deployed with all the shared libraries under APP-INF/lib. I deployed a new EAR with the same common shared libraries (APP-INF/lib jar files) packaged inside this new file.
    <Apr 28, 2009 7:45:18 PM EDT> <Error> <EJB> <BEA-012036> <Compiling generated EJB classes produced the following Java compiler error message:
    /prod/xxxx/prd/server/dev_xxxx/servers/mt_crd_180_15001/cache/EJBCompilerCache/f6yuykfqdn7i/com/test/xxx/ejb/rtmschema/CreditserviceSessionEjb_f0jmmq_Impl.java:17: error while writing com.test.xxx.ejb.rtmschema.CreditserviceSessionEjb_f0jmmq_Impl: /prod/xxxx/prd/server/dev_xxxx/servers/mt_crd_180_15001/cache/EJBCompilerCache/f6yuykfqdn7i/com/test/xxx/ejb/rtmschema/CreditserviceSessionEjb_f0jmmq_Impl.class (Too many open files)
    (source unavailable)
    1 error
    >
    <Apr 28, 2009 7:45:19 PM EDT> <Error> <Deployer> <BEA-149205> <Failed to initialize the application 'xxx-ear-ejb4' due to error weblogic.application.ModuleException: Exception preparing module: EJBModule(xxx-ejbschema-service.jar)
    Unable to deploy EJB: xxx-ejbschema-service.jar from xxx-ejbschema-service.jar:
    There are 1 nested errors:
    java.io.IOException: Compiler failed executable.exec:
    /prod/xxxx/prd/server/dev_xxxx/servers/mt_crd_180_15001/cache/EJBCompilerCache/f6yuykfqdn7i/com/test/xxx/ejb/rtmschema/CreditserviceSessionEjb_f0jmmq_Impl.java:17: error while writing com.test.xxx.ejb.rtmschema.CreditserviceSessionEjb_f0jmmq_Impl: /prod/xxxx/prd/server/dev_xxxx/servers/mt_crd_180_15001/cache/EJBCompilerCache/f6yuykfqdn7i/com/test/xxx/ejb/rtmschema/CreditserviceSessionEjb_f0jmmq_Impl.class (Too many open files)
    (source unavailable)
    1 error
    at weblogic.utils.compiler.CompilerInvoker.compileMaybeExit(CompilerInvoker.java:493)
    at weblogic.utils.compiler.CompilerInvoker.compile(CompilerInvoker.java:332)
    at weblogic.utils.compiler.CompilerInvoker.compile(CompilerInvoker.java:340)
    at weblogic.ejb.container.ejbc.EJBCompiler.doCompile(EJBCompiler.java:343)
    at weblogic.ejb.container.ejbc.EJBCompiler.compileEJB(EJBCompiler.java:533)
    at weblogic.ejb.container.ejbc.EJBCompiler.compileEJB(EJBCompiler.java:500)
    at weblogic.ejb.container.deployer.EJBDeployer.runEJBC(EJBDeployer.java:476)
    at weblogic.ejb.container.deployer.EJBDeployer.compileJar(EJBDeployer.java:798)
    at weblogic.ejb.container.deployer.EJBDeployer.compileIfNecessary(EJBDeployer.java:701)
    at weblogic.ejb.container.deployer.EJBDeployer.prepare(EJBDeployer.java:1234)
    at weblogic.ejb.container.deployer.EJBModule.prepare(EJBModule.java:372)
    at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:93)
    at weblogic.application.internal.flow.DeploymentCallbackFlow$1.next(DeploymentCallbackFlow.java:360)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:26)
    at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:56)
    at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:46)
    at weblogic.application.internal.BaseDeployment$1.next(BaseDeployment.java:615)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:26)
    at weblogic.application.internal.BaseDeployment.prepare(BaseDeployment.java:191)
    at weblogic.application.internal.DeploymentStateChecker.prepare(DeploymentStateChecker.java:147)
    at weblogic.deploy.internal.targetserver.AppContainerInvoker.prepare(AppContainerInvoker.java:61)
    at weblogic.deploy.internal.targetserver.AppDeployment.prepare(AppDeployment.java:137)
    at weblogic.management.deploy.internal.DeploymentAdapter$1.doPrepare(DeploymentAdapter.java:39)
    at weblogic.management.deploy.internal.DeploymentAdapter.prepare(DeploymentAdapter.java:187)
    at weblogic.management.deploy.internal.AppTransition$1.transitionApp(AppTransition.java:21)
    at weblogic.management.deploy.internal.ConfiguredDeployments.transitionApps(ConfiguredDeployments.java:233)
    at weblogic.management.deploy.internal.ConfiguredDeployments.prepare(ConfiguredDeployments.java:165)
    at weblogic.management.deploy.internal.ConfiguredDeployments.deploy(ConfiguredDeployments.java:122)
    at weblogic.management.deploy.internal.DeploymentServerService.resume(DeploymentServerService.java:173)
    at weblogic.management.deploy.internal.DeploymentServerService.start(DeploymentServerService.java:89)
    at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:464)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:200)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:172)
    weblogic.application.ModuleException: Exception preparing module: EJBModule(xxx-ejbschema-service.jar)
    Unable to deploy EJB: xxx-ejbschema-service.jar from xxx-ejbschema-service.jar:
    There are 1 nested errors:
    java.io.IOException: Compiler failed executable.exec:
    /prod/xxxx/prd/server/dev_xxxx/servers/mt_crd_180_15001/cache/EJBCompilerCache/f6yuykfqdn7i/com/test/xxx/ejb/rtmschema/CreditserviceSessionEjb_f0jmmq_Impl.java:17: error while writing com.test.xxx.ejb.rtmschema.CreditserviceSessionEjb_f0jmmq_Impl: /prod/xxxx/prd/server/dev_xxxx/servers/mt_crd_180_15001/cache/EJBCompilerCache/f6yuykfqdn7i/com/test/xxx/ejb/rtmschema/CreditserviceSessionEjb_f0jmmq_Impl.class (Too many open files)
    (source unavailable)
    1 error
    at weblogic.utils.compiler.CompilerInvoker.compileMaybeExit(CompilerInvoker.java:493)
    at weblogic.utils.compiler.CompilerInvoker.compile(CompilerInvoker.java:332)
    at weblogic.utils.compiler.CompilerInvoker.compile(CompilerInvoker.java:340)
    at weblogic.ejb.container.ejbc.EJBCompiler.doCompile(EJBCompiler.java:343)
    at weblogic.ejb.container.ejbc.EJBCompiler.compileEJB(EJBCompiler.java:533)
    at weblogic.ejb.container.ejbc.EJBCompiler.compileEJB(EJBCompiler.java:500)
    at weblogic.ejb.container.deployer.EJBDeployer.runEJBC(EJBDeployer.java:476)
    at weblogic.ejb.container.deployer.EJBDeployer.compileJar(EJBDeployer.java:798)
    at weblogic.ejb.container.deployer.EJBDeployer.compileIfNecessary(EJBDeployer.java:701)
    at weblogic.ejb.container.deployer.EJBDeployer.prepare(EJBDeployer.java:1234)
    at weblogic.ejb.container.deployer.EJBModule.prepare(EJBModule.java:372)
    at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:93)
    at weblogic.application.internal.flow.DeploymentCallbackFlow$1.next(DeploymentCallbackFlow.java:360)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:26)
    at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:56)
    at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:46)
    at weblogic.application.internal.BaseDeployment$1.next(BaseDeployment.java:615)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:26)
    at weblogic.application.internal.BaseDeployment.prepare(BaseDeployment.java:191)
    at weblogic.application.internal.DeploymentStateChecker.prepare(DeploymentStateChecker.java:147)
    at weblogic.deploy.internal.targetserver.AppContainerInvoker.prepare(AppContainerInvoker.java:61)
    at weblogic.deploy.internal.targetserver.AppDeployment.prepare(AppDeployment.java:137)
    at weblogic.management.deploy.internal.DeploymentAdapter$1.doPrepare(DeploymentAdapter.java:39)
    at weblogic.management.deploy.internal.DeploymentAdapter.prepare(DeploymentAdapter.java:187)
    at weblogic.management.deploy.internal.AppTransition$1.transitionApp(AppTransition.java:21)
    at weblogic.management.deploy.internal.ConfiguredDeployments.transitionApps(ConfiguredDeployments.java:233)
    at weblogic.management.deploy.internal.ConfiguredDeployments.prepare(ConfiguredDeployments.java:165)
    at weblogic.management.deploy.internal.ConfiguredDeployments.deploy(ConfiguredDeployments.java:122)
    at weblogic.management.deploy.internal.DeploymentServerService.resume(DeploymentServerService.java:173)
    at weblogic.management.deploy.internal.DeploymentServerService.start(DeploymentServerService.java:89)
    at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:464)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:200)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:172)
    at weblogic.ejb.container.deployer.EJBModule.prepare(EJBModule.java:399)
    at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:93)
    at weblogic.application.internal.flow.DeploymentCallbackFlow$1.next(DeploymentCallbackFlow.java:360)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:26)
    at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:56)
    Truncated. see log file for complete stacktrace
    java.io.IOException: Compiler failed executable.exec:
    /prod/xxxx/prd/server/dev_xxxx/servers/mt_crd_180_15001/cache/EJBCompilerCache/f6yuykfqdn7i/com/test/xxx/ejb/rtmschema/CreditserviceSessionEjb_f0jmmq_Impl.java:17: error while writing com.test.xxx.ejb.rtmschema.CreditserviceSessionEjb_f0jmmq_Impl: /prod/xxxx/prd/server/dev_xxxx/servers/mt_crd_180_15001/cache/EJBCompilerCache/f6yuykfqdn7i/com/test/xxx/ejb/rtmschema/CreditserviceSessionEjb_f0jmmq_Impl.class (Too many open files)
    (source unavailable)
    1 error
    at weblogic.utils.compiler.CompilerInvoker.compileMaybeExit(CompilerInvoker.java:493)
    at weblogic.utils.compiler.CompilerInvoker.compile(CompilerInvoker.java:332)
    at weblogic.utils.compiler.CompilerInvoker.compile(CompilerInvoker.java:340)
    at weblogic.ejb.container.ejbc.EJBCompiler.doCompile(EJBCompiler.java:343)
    at weblogic.ejb.container.ejbc.EJBCompiler.compileEJB(EJBCompiler.java:533)
    Truncated. see log file for complete stacktrace
    Thanks
    Abhi

    Put only the files which need to be shared (e.g. persistent store files for JMS, the JTA transaction store, etc.) on the network drive; put all other files, including logs, on the local disk and see if that makes a difference.
    NAS is slower than other storage setups like SAN and DAS.
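    If you want to see how close the server is to the descriptor limit while it runs, here's a minimal sketch using the JDK's management beans (assumes a Sun/Oracle JDK on a Unix-like OS; the com.sun.management interface is not part of the standard API, and the class name is made up):

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdUsage {
        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
                // Compare the live count against the per-process maximum (ulimit -n).
                System.out.println("Open file descriptors: " + unix.getOpenFileDescriptorCount()
                        + " of " + unix.getMaxFileDescriptorCount());
            } else {
                System.out.println("Descriptor counts are unavailable on this JVM/OS.");
            }
        }
    }

    Running the same check from a monitoring thread (or exposing it over JMX) tells you whether the EJB compile step is genuinely hitting the limit or whether descriptors are leaking elsewhere.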

  • Open Directory logging? (Too many open files)

    I recently had a request from another admin to increase the logging output of my OD servers. So I opened WGM, opened the OLCGlobalConfig section, and changed the olcLogLevel to Filter. At that point my OD server stopped allowing edits and updates to the server, and started reporting "too many open files" in the system.log. So I guess I have two questions.
    1) What is the proper method for increasing the log level of an OD server? The other admin is looking for query data and queries per second data. So if there is some other way of getting that data I'm all for it.
    2) Is there any solution for the "too many open files" if I change the olcLogLevel in the future?
    Thanks,
    Derek

    Explore the odutil command. There are seven levels of logging available; error is the default. You may want to try notice to see if that gives you the information you want.
    sudo odutil set log notice

  • Too many red entries in ST02

    Dear Experts,
    I have installed SAP ECC 6 EHP4 Ready SP1 with DB2 9.1 FP9 on HP-UX 11.31, 1 x Itanium 1.46 GHz with 16 GB RAM.
    The issue is that there are too many reds in the ST02 tune screen. Kindly suggest a note matching the given information, or provide a solution.
    The ST02 summary is as below:
    Buffer Name                    Comment
    Profile Parameter             Value      Unit  Comment
    Program buffer                 PXA
    abap/buffersize               300000     kB    Size of program buffer
    abap/pxa                      shared           Program buffer mode
    CUA buffer                     CUA
    rsdb/cua/buffersize           3000       kB    Size of CUA buffer
    Screen buffer                  PRES
    zcsa/presentation_buffer_area 4400000    Byte  Size of screen buffer
    sap/bufdir_entries            2000             Max. number of buffered screens
    Generic key table buffer       TABL
    zcsa/table_buffer_area        30000000   Byte  Size of generic key table buffer
    zcsa/db_max_buftab            5000             Max. number of buffered objects
    Single record table buffer     TABLP
    rtbb/buffer_length            10000      kB    Size of single record table buffer
    rtbb/max_tables               500              Max. number of buffered tables
    Export/import buffer           EIBUF
    rsdb/obj/buffersize           4096       kB    Size of export/import buffer
    rsdb/obj/max_objects          2000             Max. number of objects in the buffer
    rsdb/obj/large_object_size    8192       Bytes Estimation for the size of the largest object
    rsdb/obj/mutex_n              0                Number of mutexes in Export/Import buffer
    OTR buffer                     OTR
    rsdb/otr/buffersize_kb        4096       kB    Size of OTR buffer
    rsdb/otr/max_objects          2000             Max. number of objects in the buffer
    rsdb/otr/mutex_n              0                Number of mutexes in OTR buffer
    Exp/Imp SHM buffer             ESM
    rsdb/esm/buffersize_kb        4096       kB    Size of exp/imp SHM buffer
    rsdb/esm/max_objects          2000             Max. number of objects in the buffer
    rsdb/esm/large_object_size    8192       Bytes Estimation for the size of the largest object
    rsdb/esm/mutex_n              0                Number of mutexes in Exp/Imp SHM buffer
    Table definition buffer        TTAB
    rsdb/ntab/entrycount          20000            Max. number of table definitions buffered
    The size of the TTAB is nearly 100 bytes * rsdb/ntab/entrycount
    Field description buffer       FTAB
    rsdb/ntab/ftabsize            30000      kB    Size of field description buffer
    rsdb/ntab/entrycount          20000            Max. number / 2 of table descriptions buf
    FTAB needs about 700 bytes per used entry
    Initial record buffer          IRBD
    rsdb/ntab/irbdsize            6000       kB    Size of initial record buffer
    rsdb/ntab/entrycount          20000            Max. number / 2 of initial records buffer
    IRBD needs about 300 bytes per used entry
    Short nametab (NTAB)           SNTAB
    rsdb/ntab/sntabsize           3000       kB    Size of short nametab
    rsdb/ntab/entrycount          20000            Max. number / 2 of entries buffered
    Calendar buffer                CALE
    zcsa/calendar_area            500000     Byte  Size of calendar buffer
    zcsa/calendar_ids             200              Max. number of directory entries
    Roll, extended and heap memory EXTM
    ztta/roll_area                3000000    Byte  Roll area per workprocess (total)
    ztta/roll_first               1          Byte  First amount of roll area used in a dialog WP
    ztta/short_area               3200000    Byte  Short area per workprocess
    rdisp/ROLL_SHM                16384      8 kB  Part of roll file in shared memory
    rdisp/PG_SHM                  8192       8 kB  Part of paging file in shared memory
    rdisp/PG_LOCAL                150        8 kB  Paging buffer per workprocess
    em/initial_size_MB            4092       MB    Initial size of extended memory
    em/blocksize_KB               4096       kB    Size of one extended memory block
    em/address_space_MB           4092       MB    Address space reserved for ext. mem. (NT only
    ztta/roll_extension           2000000000 Byte  Max. extended mem. per session (external mode
    abap/heap_area_dia            2000000000 Byte  Max. heap memory for dialog workprocesses
    abap/heap_area_nondia         2000000000 Byte  Max. heap memory for non-dialog workprocesses
    abap/heap_area_total          2000000000 Byte  Max. usable heap memory
    abap/heaplimit                40000000   Byte  Workprocess restart limit of heap memory
    abap/use_paging               0                Paging for flat tables used (1) or not (0)
    Statistic parameters
    rsdb/staton                   1                Statistic turned on (1) or off (0)
    rsdb/stattime                 0                Times for statistic turned on (1) or off (0)
    thanks
    sadiq

    Hello Sadiq,
    I agree with the previous post; I'm afraid there is no quick fix for this, especially since we cannot see all of these red entries.
    If your errors are purely in memory management on the SAP side, you should consider posting in the CST "memory management" part of this forum.
    Advice from a DB2 perspective is to make sure that all memory-specific settings are correct as per note:
    899322 - DB6: DB2 9.1 Standard Parameter Settings
    After setting these parameters and carrying out further fine tuning yourself, you should consider scheduling an EarlyWatch session with SAP.
    Best of luck,
    Paul
    Edited by: Paul Murtagh on Apr 14, 2011 4:50 PM

  • Too many objects match the primary key oracle.jbo.Key

    Hi OAF Gurus,
    Currently we are implementing the R12 upgrade; for this we have deployed all the custom OAF application files to the respective JAVA_TOP folder.
    We have a custom municipal postal application which tracks postal details.
    The page runs perfectly fine without any error in the 11i instance, but the same page is erroring out in R12.
    In R12 it shows the error: Too many objects match the primary key oracle.jbo.Key[112010 2014-10-01]
    Here 112010 is the postal code id and 2014-10-01 is the effective start date.
    We have a custom table xxad_postal_codes_f (a date-tracked table) which contains postal_code_id and effective_start_date (the primary key is the combination of postal_code_id and effective_start_date).
    The table already contains a row for postal_code_id = 112010 and effective_start_date = sysdate.
    Now we want to update the entry for the same postal code, with the id remaining 112010 and an effective_start_date of 2014-10-01, through the custom PostCodeChangePG.
    At the time of save we get the error: Too many objects match the primary key oracle.jbo.Key[112010 2014-10-01]
    The table doesn't contain any row matching [112010 2014-10-01] at the time of insertion, so there should not be any duplication of the primary key, yet we still get the error.
    Please let us know how we can handle this.
    Below is the code which is called on click of the Save button of PostCodeChangePG:
    if (pageContext.getParameter("Apply") != null) {
        PCodeCoWorkerBase coWorker = getCoWorker(pageContext, webBean);
        coWorker.processApply();
    }
    Code in PCodeCoWorkerBase
    public void processApply() {
        String postalCodeId = UIHelper.getRequiredParameter(pageContext, "postalCodeId");
        Date startDate = UIHelper.getRequiredDateParameter(pageContext, "EffectiveStartDate");
        Serializable[] postalCodeData = (Serializable[]) applicationModule.invokeMethod("insertPostalCodeMajorChange", params, paramTypes);
        finalizeTransactionAndRedirect(postalCodeData);
    }
    Code in Application Module
    public Serializable[] insertPostalCodeMajorChange(String postalCodeId, Date date) {
        PCodeAmWorker amWorker = new PCodeAmWorker(this);
        return amWorker.insertMajorChange(postalCodeId, DateHelper.convertClientToServerDate(getOADBTransaction(), date));
    }
    Code in PCodeAmWorker
    public Serializable[] insertMajorChange(String postalCodeId, Date date) {
        // Get the view objects we need from the application module
        OAViewObject viewObject = (OAViewObject) applicationModule.getPCodesVO();
        PCodesVORowImpl currentRow = (PCodesVORowImpl) viewObject.getCurrentRow();
        currentRow.validate();
        currentRow.setEffectiveStartDate(date);
        currentRow.setComment1(currentRow.getNewComment());
        // Create a new row based on the current row
        PCodesVORowImpl newRow = (PCodesVORowImpl) viewObject.createAndInitRow(currentRow); // This is the line that fails with the error
        // Get the new effective start date as entered by the user
        Date effectiveStartDate = currentRow.getEffectiveStartDate();
        // Calculate the previous period's effective end date
        Date previousEffectiveEndDate = DateHelper.addDays(effectiveStartDate, -1);
        // Refresh the current row (the one changed by the UI) with the data it had at the beginning of the transaction
        currentRow.refresh(Row.REFRESH_UNDO_CHANGES);
        // The current row now represents data for the previous period; set the effective end date for the previous period
        currentRow.setEffectiveEndDate(previousEffectiveEndDate);
        // Insert the newly created row that now represents the new period
        viewObject.insertRow(newRow);
        applicationModule.apply();
        return generateResult(newRow);
    }
    PCodesVO() is based on PostalCodeEO
    below is the code from PostalCodeEOImpl
    public void create(AttributeList attributeList) {
        // NOTE: This call will set attribute values if the entity object is created with a call to vo.createAndInitRow(..)
        super.create(attributeList);
        if (getPostalCodeId() == null)
            setPostalCodeId(getOADBTransaction().getSequenceValue("XXAD_POSTAL_CODES_S"));
        if (getEffectiveStartDate() == null)
            setEffectiveStartDate(getOADBTransaction().getCurrentDBDate());
    }
    After diagnosing the issue, we found that the error comes from the AM worker code when creating a new row: PCodesVORowImpl newRow = (PCodesVORowImpl) viewObject.createAndInitRow(currentRow);
    We tried many things, such as clearing the entity cache and the VO cache and validating for a duplicate primary key, but we are still not able to resolve this.
    Please advise how to insert a new row into the PCodesVO without any exception.
    Thanks,
    Pallavi

    Hi,
    One question here: if you are updating an existing record, then why are you trying to create a new row?
    PCodesVORowImpl newRow = (PCodesVORowImpl) viewObject.createAndInitRow(currentRow);
    Thanks
    Pratap
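    For what it's worth: createAndInitRow(currentRow) copies the source row's attributes, including the key, and the code above has already set the new start date on currentRow, so for a moment both rows in the entity cache hold the key [112010 2014-10-01], which is exactly what oracle.jbo.Key complains about. A hedged sketch of one possible reordering, reusing the names from the snippets above (the method name is made up, the attribute names "PostalCodeId" and "EffectiveStartDate" are inferred from the EO accessors, and whether this fits the real validation logic is an assumption):

    // Sketch only: end-date the current row first, then create the new row
    // with its own distinct key, instead of copying the whole current row.
    // NameValuePairs is oracle.jbo.NameValuePairs.
    public Serializable[] insertMajorChangeSketch(String postalCodeId, Date newStartDate) {
        OAViewObject viewObject = (OAViewObject) applicationModule.getPCodesVO();
        PCodesVORowImpl currentRow = (PCodesVORowImpl) viewObject.getCurrentRow();
        // Close out the previous period; the current row keeps its old key.
        Date previousEffectiveEndDate = DateHelper.addDays(newStartDate, -1);
        currentRow.refresh(Row.REFRESH_UNDO_CHANGES);
        currentRow.setEffectiveEndDate(previousEffectiveEndDate);
        // Create the new period's row with only the attributes it needs, so
        // the two rows never hold identical key values at the same time.
        NameValuePairs nvp = new NameValuePairs(
            new String[] { "PostalCodeId", "EffectiveStartDate" },
            new Object[] { currentRow.getPostalCodeId(), newStartDate });
        PCodesVORowImpl newRow = (PCodesVORowImpl) viewObject.createAndInitRow(nvp);
        viewObject.insertRow(newRow);
        applicationModule.apply();
        return generateResult(newRow);
    }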

  • iPhone cannot sync address book, "too many flushes" in Console

    Hello, I ran into this recently and it's driving me nuts. I've restored the phone twice without restoring a backup, setting it up as a new phone each time.
    It'll sync everything else fine, but not this. I have reset the sync history in iSync and deleted my address book completely (I have a backup of it), yet iTunes still shows the old groups under the Info tab. I've double-checked that I am the owner on the permissions page of the Mail/iCal/iTunes/etc. folders in %users/Library, and repaired permissions. I'm running out of things to check.
    Whenever I try to sync, this is the information that shows up in Console. I've deleted preferences; I'm almost ready to completely blow iTunes away and re-import/re-rate my iTunes library from scratch!
    2/27/10 1:19:44 PM AddressBookSync[804] [111520] |Miscellaneous|Error| SyncServices assertion failure (_flushCount > 0) in [ISDClientState enableFlush], /SourceCache/SyncServices2/SyncServices2-578/SyncServices/ISDClientState.m:181 too many enableFlushes
    2/27/10 1:19:44 PM com.apple.syncservices.SyncServer[799] 2010-02-27 13:19:44.891 AddressBookSync[804:903] [111520] |Miscellaneous|Error| SyncServices assertion failure (_flushCount > 0) in [ISDClientState enableFlush], /SourceCache/SyncServices2/SyncServices2-578/SyncServices/ISDClientState.m:181 too many enableFlushes
    2/27/10 1:19:44 PM AddressBookSync[804] AddressBookSync (client id: com.apple.AddressBook) error: Exception running AddressBookSync: [ISDClientState enableFlush]: too many enableFlushes
    2/27/10 1:19:44 PM com.apple.syncservices.SyncServer[799] 2010-02-27 13:19:44.891 AddressBookSync[804:903] AddressBookSync (client id: com.apple.AddressBook) error: Exception running AddressBookSync: [ISDClientState enableFlush]: too many enableFlushes
    I also cannot sync anything from the Info tab (contacts, calendars); I can't replace the data either. If I enable iCal syncing, it just hangs on the sync and doesn't error out; it has stayed on "syncing contacts" for over 12 hours so far with no change.
    Message was edited by: thirdgen89gta

    Now isn't this interesting: I decided on a whim to have Console open when I reset the sync preferences, and this is what it reported. Definitely some permissions errors. I've already run
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |DataManager|Warning| Resetting Sync Data - Including Clients and Schemas=NO
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |Server|Warning| Admin database corrupted. Resetting data directory.
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |DataManager|Error| Unable to remove /Users/rr152510/Library/Application Support/SyncServices/Local/TFSM/com.apple.Bookmarks/data.syncdb, error was Error Domain=NSCocoaErrorDomain Code=513 UserInfo=0x1001a5320 "“data.syncdb” couldn’t be removed because you don’t have permission to access it."
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |DataManager|Error| Unable to remove /Users/rr152510/Library/Application Support/SyncServices/Local/TFSM/com.apple.Calendars/data.syncdb, error was Error Domain=NSCocoaErrorDomain Code=513 UserInfo=0x1001b1860 "“data.syncdb” couldn’t be removed because you don’t have permission to access it."
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |DataManager|Error| Unable to remove /Users/rr152510/Library/Application Support/SyncServices/Local/TFSM/com.apple.Contacts/data.syncdb, error was Error Domain=NSCocoaErrorDomain Code=513 UserInfo=0x1001a8ac0 "“data.syncdb” couldn’t be removed because you don’t have permission to access it."
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |DataManager|Error| Unable to remove /Users/rr152510/Library/Application Support/SyncServices/Local/TFSM/com.apple.dashboard.widgets/data.syncdb, error was Error Domain=NSCocoaErrorDomain Code=513 UserInfo=0x10019e0a0 "“data.syncdb” couldn’t be removed because you don’t have permission to access it."
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |DataManager|Error| Unable to remove /Users/rr152510/Library/Application Support/SyncServices/Local/TFSM/com.apple.dock.items/data.syncdb, error was Error Domain=NSCocoaErrorDomain Code=513 UserInfo=0x1001a8210 "“data.syncdb” couldn’t be removed because you don’t have permission to access it."
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |DataManager|Error| Unable to remove /Users/rr152510/Library/Application Support/SyncServices/Local/TFSM/com.apple.Keychain/data.syncdb, error was Error Domain=NSCocoaErrorDomain Code=513 UserInfo=0x1001a9b60 "“data.syncdb” couldn’t be removed because you don’t have permission to access it."
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |DataManager|Error| Unable to remove /Users/rr152510/Library/Application Support/SyncServices/Local/TFSM/com.apple.MailAccounts/data.syncdb, error was Error Domain=NSCocoaErrorDomain Code=513 UserInfo=0x100505d60 "“data.syncdb” couldn’t be removed because you don’t have permission to access it."
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |DataManager|Error| Unable to remove /Users/rr152510/Library/Application Support/SyncServices/Local/TFSM/com.apple.MailConfiguration/data.syncdb, error was Error Domain=NSCocoaErrorDomain Code=513 UserInfo=0x10051a990 "“data.syncdb” couldn’t be removed because you don’t have permission to access it."
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |DataManager|Error| Unable to remove /Users/rr152510/Library/Application Support/SyncServices/Local/TFSM/com.apple.Notes/data.syncdb, error was Error Domain=NSCocoaErrorDomain Code=513 UserInfo=0x100525f70 "“data.syncdb” couldn’t be removed because you don’t have permission to access it."
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |DataManager|Error| Unable to remove /Users/rr152510/Library/Application Support/SyncServices/Local/TFSM/com.apple.Preferences/data.syncdb, error was Error Domain=NSCocoaErrorDomain Code=513 UserInfo=0x100511a90 "“data.syncdb” couldn’t be removed because you don’t have permission to access it."
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |DataManager|Error| Unable to remove /Users/rr152510/Library/Application Support/SyncServices/Local/TFSM/com.microsoft.entourage.Notes/data.syncdb, error was Error Domain=NSCocoaErrorDomain Code=513 UserInfo=0x10051b3d0 "“data.syncdb” couldn’t be removed because you don’t have permission to access it."
    2/27/10 3:26:06 PM SyncServer[769] [124bf0] |DataManager|Warning| resetSyncData: could not delete data reference directory: /Users/rr152510/Library/Application Support/SyncServices/Local/DataReferences
    2/27/10 3:26:09 PM AddressBookSync[787] [111540] |ISDRecordStore|Error| Unable to delete data reference <ISDDataWrapper[7621EBDE-367B-474C-9511-6635DA1605B9.com.apple.AddressBook.data ] signature=<23c511c0 58a95e02 5f839cb7 3e5a070a 104d4f50>>, error was Error Domain=ISyncDataWrapperError Code=2 "The operation couldn’t be completed. (ISyncDataWrapperError error 2.)"
    2/27/10 3:26:09 PM com.apple.syncservices.SyncServer[786] 2010-02-27 15:26:09.404 AddressBookSync[787:903] [111540] |ISDRecordStore|Error| Unable to delete data reference <ISDDataWrapper[7621EBDE-367B-474C-9511-6635DA1605B9.com.apple.AddressBook.data ] signature=<23c511c0 58a95e02 5f839cb7 3e5a070a 104d4f50>>, error was Error Domain=ISyncDataWrapperError Code=2 "The operation couldn’t be completed. (ISyncDataWrapperError error 2.)"
    2/27/10 3:26:10 PM AddressBookSync[787] [111540] |Miscellaneous|Error| SyncServices assertion failure (_flushCount > 0) in [ISDClientState enableFlush], /SourceCache/SyncServices2/SyncServices2-578/SyncServices/ISDClientState.m:181 too many enableFlushes
    2/27/10 3:26:10 PM AddressBookSync[787] AddressBookSync (client id: com.apple.AddressBook) error: Exception running AddressBookSync: [ISDClientState enableFlush]: too many enableFlushes
    2/27/10 3:26:10 PM com.apple.syncservices.SyncServer[786] 2010-02-27 15:26:10.964 AddressBookSync[787:903] [111540] |Miscellaneous|Error| SyncServices assertion failure (_flushCount > 0) in [ISDClientState enableFlush], /SourceCache/SyncServices2/SyncServices2-578/SyncServices/ISDClientState.m:181 too many enableFlushes
    2/27/10 3:26:10 PM com.apple.syncservices.SyncServer[786] 2010-02-27 15:26:10.976 AddressBookSync[787:903] AddressBookSync (client id: com.apple.AddressBook) error: Exception running AddressBookSync: [ISDClientState enableFlush]: too many enableFlushes
    After running sudo chown -R rr152510 SyncServices/ from Terminal, I received the same errors. So either it's not following through the sub-directories and changing the owner, or it's something else. I'm going to try it from the root user, though sudo should have given me permission to do anything I wanted.

  • Too many simultaneous persistent searches

    The Access Manager (2005Q1) in our deployment talks to load-balanced Directory Server instances and, as recommended by Sun, we have set the value of the property com.sun.am.event.connection.idle.timeout to a value lower than the load-balancer timeout.
    However, on enabling this property, we see the following error messages in the debug log file amEventService:
    WARNING: EventService.processResponse() - Received a NULL Response. Attempting to re-start persistent searches
    EventService.processResponse() - received DS message => [LDAPMessage] 85687 SearchResult {resultCode=51, errorMessage=too many simultaneous persistent searches}
    netscape.ldap.LDAPException: Error result (51); too many simultaneous persistent searches; LDAP server is busy
    Any idea why this occurs?
    Do we need to modify the value associated with the attribute nsslapd-maxpsearch?
    How many persistent searches does Access Manager fire at the Directory Server? Can this be controlled?
    TIA,
    Chetan

    I am having an issue where the Access Manager does not seem to fire any persistent searches at all to the DS.
    We have disabled the properties which disable certain types of persistent searches, and hence in reality there should be lots of persistent searches being fired at the DS.
    Also, there does seem to be some communication between the DS and the Access Manager instance, as the AM instance we work on talks only to a particular DS instance. But they do not see any persistent searches being fired from our side at all; the only time they did see some persistent searches was when I did a persistent search from the command line.
    What could be the issue?
    thanks
    anand

  • WL10 Compiler executable.exec error "too many open files" deploying ear

    When I try to deploy an EAR containing a web module and some EJB modules, I get this error:
    <Info> <J2EE Deployment SPI> <BEA-260121> <Initiating deploy operation for application, MB-ADM_EAR [archive: /wl/wlments/MB-ADM_EAR/MB-ADM_EAR.ear], to Cluster1 .>
    Task 1 initiated: [Deployer:149026]deploy application MB-ADM_EAR on Cluster1.
    Task 1 failed: [Deployer:149026]deploy application MB-ADM_EAR on Cluster1.
    Target state: deploy failed on Cluster Cluster1
    java.io.IOException: Compiler failed executable.exec:
    /wl/servers/MS1/cache/EJBCompilerCache/-1dj0waj53cbu8/it/apps/ejbs/core/ExSvc_167qnt_Impl.java:17: error while writing it.apps.ejbs.core.ExSvc_167qnt_Impl: /wl/servers/MS1/cache/EJBCompilerCache/-1dj0waj53cbu8/it/apps/ejbs/core/ExSvc_167qnt_Impl.class (Too many open files)
    If I split the EAR in two parts, web in one EAR and EJBs in another EAR, the deploy is successful.
    Do you have any idea of what is happening?
    Below the environment specifications:
    JVM Version: jrockit_150_11
    JVM Heap: 512
    WebLogic: 10.0.1.0
    Server: Hewlett Packard DL585 G2
    OS: Linux / 2.6.5-7.287.3.PTF.345489.1-smp
    Thank you, bye,
    Marco

    Hello Marco.
    When you try to deploy an EAR, WebLogic Server unjars it at deployment time, compiles the files, and so on. Every operating system has a limit on how many files can be opened by a single process. If your EAR is big, the number of files WLS unjars will also be large, hence you hit the limit. By splitting your EAR in two, you split the WLS task into smaller parts, which means the number of files it unjars at a time is lower.
    The following note tells what needs to be done to avert this issue:
    http://download.oracle.com/docs/cd/E12839_01/doc.1111/e14772/weblogic_server_issues.htm#CHDGFFHD
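    To get a feel for how many files the unjar step involves, you can simply count the entries in the archive. A minimal sketch (it counts only top-level entries, not files inside nested jars, and the class name is made up for the example):

    import java.io.IOException;
    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    public class EarEntryCounter {
        // Prints the number of entries in an EAR/JAR, a rough lower bound
        // on how many files the server may extract during deployment.
        public static void main(String[] args) throws IOException {
            JarFile ear = new JarFile(args[0]);
            int entries = 0;
            for (Enumeration<JarEntry> e = ear.entries(); e.hasMoreElements(); ) {
                e.nextElement();
                entries++;
            }
            ear.close();
            System.out.println(args[0] + " contains " + entries + " entries");
        }
    }

    If that number is anywhere near the descriptor limit for the server's user (ulimit -n), raising the limit as the note above describes is a more robust fix than splitting the EAR.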

  • WLS 92MP1: Application Poller issue Too many open files

    Hi,
    We have a wls92mp1 domain on Linux AS4 (64-bit) with Sun JDK 1.5.0_14. It contains only the Admin server, on which we have deployed the application. Over a period of time the server starts showing the below message in the logs. We have not deployed the application from the autodeploy directory. And the file /home/userid/wls92/etg/servers/userid_a/cache/.app_poller_lastrun is present in that location, yet it still throws FileNotFoundException.
    <Error> <Application Poller> <BEA-149411> <I/O exception encountered java.io.FileNotFoundException: /home/userid/wls92/etg/servers/userid_a/cache/.app_poller_lastrun (Too many open files).
    java.io.FileNotFoundException: /home/userid/wls92/etg/servers/userid_a/cache/.app_poller_lastrun (Too many open files)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
    at java.io.FileWriter.<init>(FileWriter.java:73)
    at weblogic.management.deploy.GenericAppPoller.setLastRunFileMap(GenericAppPoller.java:423)
    Any help regarding this would be highly appreciated.
    Thanks.

    Hi,
    Looking at the error above, the error code (BEA-149411) is described as follows:
    149411: I/O exception encountered {0}.
    L10n Package: weblogic.management.deploy.internal
    I18n Package: weblogic.management.deploy.internal
    Subsystem: Application Poller
    Severity: Error
    Stack Trace: true
    Message Detail: An I/O exception denotes a failure to perform a read/write operation on the application files.
    Cause: An I/O exception can occur during file read/write while deploying an application.
    Action: Take corrective action based on the exception message details.
    I think this helps.
    -abhi
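    One thing worth knowing: the FileNotFoundException here does not mean the file is missing; "(Too many open files)" is how the JVM surfaces file-descriptor exhaustion on open(). A minimal repro sketch (assumes a Unix-like system with /etc/hosts present and a finite descriptor limit; the class name is made up):

    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.util.ArrayList;
    import java.util.List;

    public class FdExhaustionDemo {
        public static void main(String[] args) throws Exception {
            List<FileInputStream> leaked = new ArrayList<FileInputStream>();
            try {
                while (true) {
                    // Deliberately never closed: each open consumes one descriptor.
                    leaked.add(new FileInputStream("/etc/hosts"));
                }
            } catch (FileNotFoundException e) {
                // Reports "... (Too many open files)" although the file exists.
                System.out.println("Failed after " + leaked.size() + " opens: " + e.getMessage());
            } finally {
                for (FileInputStream in : leaked) {
                    in.close();
                }
            }
        }
    }

    So the corrective action is to find what is leaking descriptors (lsof -p <pid> helps) or raise the user's ulimit, not to look for a missing file.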

  • "Bad Data, too many results for shortname"

    Every 10 minutes since October 13 the wikid error log has recorded the following error message:
    *"Bad Data, too many results for shortname: username"*, always naming the same user, who is not actually doing anything.
    Coinciding with that message at the same time every 10 minutes, the system log records 13 or 14 of the following:
    *"Python\[15645\]: ••• -\[NSAutoreleasePool release]: This pool has already been released, do not drain it (double release)."*
    15645 is a process owned by _teamserver, which would appear to confirm the connection between the python and wikid error messages.
    Last clue: The messages (I have determined in hindsight) started while I was setting up the server to do vacation messages. The user named in the "Bad data" messages was the user who wanted to (and did) set up a vacation message at that time. They worked, too.
    Anyone have any ideas about what is going on and how to fix it? BTW, google does not find even one page where this "Bad data" error message is mentioned.

    Thanks for your response. To answer your questions:
    "Are you using AD for your directory?" No. OD master only.
    "Are there users with duplicate shortnames in your directory system?" No, and to double-check, I searched with WGM and only the one came up.
    "As a bit of background, the wiki server keeps a private index of user and group information so we can track user preferences, and the "Bad Data, too many results for shortname: username" error is triggered when we do a lookup in that index for a particular user and we (unexpectedly) hit duplicate entries. Are you seeing an issue with using the software, or just an annoying log message?" It's hard to say (for me) what might be related. The directory or wiki related issues with the server include:
    • A memory issue with slapd that eventually causes it to crash (preceded by lots of "bdbdbcache: db_open(various: sn, displayname, givenname, mail, maybe others) failed: Cannot allocate memory (12)", "logging region out of memory", and "index_param failed" errors).
    • The wiki is slow, despite very light use.
    • Wake from sleep (network clients with authentication required) can be very slow. Several minutes even.
    Any suggestions you may have would be appreciated.

  • IChat Error: "attempted to login too many times" problem/solution

    I don't know if it's okay to post a new topic/question and then answer it, but I was having this issue and finally found a resolution, so I wanted to share it with everyone.
    Your problem: You log into iChat, and then it suddenly begins to act crazy, logging in and out multiple times for no apparent reason... then it gives you the message, yes THE message: "You have attempted to login too often in a short period of time. Wait a few minutes before trying to login again".
    You silently think to yourself, "w.t.f". lol.
    Okay, well different things worked for different people; it just depends on the time, the moment, and whether or not iChat wants to act right...
    SOLUTIONS:
    1. Change to SSL
    Go to iChat Menu --> Preferences --> Accounts --> Server Settings --> then check "Use SSL"
    2. Use Port 443
    iChat--> Preferences--> Server Settings --> Use port "443"
    3. Delete the user from your iChat accounts by pressing the minus sign, then wait a couple of minutes (to make sure you don't get the "logged in too many times" error) and re-add the account.
    4. THIS SHOULD MOST DEFINITELY WORK:
    1.) Quit iChat if it is already open, and then open Finder.
    2.) Browse to your user folder (can be found by clicking on your main hard drive, then going to Users > Your Username).
    3.) Open the Library folder and go to the Preferences directory.
    4.) You'll see a long list of files with a .plist extension. The one we're looking for is com.apple.iChat.AIM.plist. When you find it, delete it by dragging the file to the trash.
    5.) Launch iChat, and this time you should be able to sign in to your AIM account successfully. iChat will automatically rebuild the preference file you deleted.
    More info:
    Now some people may wonder, "Why is this happening to me?" I'll tell you why:
    The cause of the issue most likely stems from using your Mac at multiple locations (such as home, school, or work), which can lead to the preference files somehow getting corrupted. It sounds serious, but really shouldn't take more than a minute or so to resolve. Just follow the above steps...
    I wish I could take credit for all of these, but I just pulled them from different sites and forums, including forums.macrumers.com and macyourself.com.
    I hope this helps someone out there. Good luck.
    Oh, and feel free to add other ways to fix this weird issue.

    Also look at http://discussions.apple.com/thread.jspa?threadID=1462798
    The Apple article on the related issue:
    http://support.apple.com/kb/TS1643
    10:24 PM Saturday; June 6, 2009
    Please, if posting logs, do not post any log info after the line "Binary Images for iChat".
    Message was edited by: Ralph Johns (UK)
    Corrected link

  • What does "Too many open files" have to do with FIFOs?

    Hi folks.
    I've just finished a middleware service for my company that receives files over a TCP/IP connection and stores them in a cache directory. An external program gets called, consumes the files from the cache directory, and puts a result file there, which itself gets sent back to the client over TCP/IP.
    After that's done, the cache file (and everything left over) gets deleted.
    The middleware server is multithreaded and creates a new thread for each request connection.
    These threads are supposed to die when the request is done.
    All works fine: cache files get deleted, threads die when they should, the files get consumed by the external program as expected, and so on.
    BUT (there's always a but ;)) to migrate from an older solution, the old data gets fed into the new system, creating about 5 to 8 requests a second.
    After about 20-30 minutes, the service drops out with "IOException: Too many open files" on the very line where the external program gets called.
    I swept through my code, seeking to close even the most unlikely stream that gets opened (even the output streams of the external process ;), but the problem stays.
    Things I thought about:
    - It's the external program: unlikely, since the lsof command (which shows the list of open files on Linux) says that the open files belong to Java processes. Having a closer look at the list, I see a large number of "FIFO" entries that keeps growing, plus an (almost) constant number of "normal" open file handles.
    So perhaps the handles get opened (and not closed) somewhere else, and the external program is just the drop that makes the cask flood over.
    - Must be a file handle that's not closed: I find only the "FIFO" entries growing. Yet I don't really know what that means. I just think it's something different from a "normal" file handle, but maybe I'm wrong.
    - Must be a socket connection that's not closed: at least the client that sends requests to the middleware service closes the connection properly, and I am, well, quite sure that my code does as well, but who knows? How can I be sure?
    That was a long description, most of which will be skipped by you. To boil it down to some questions:
    1.) What do the "FIFO" entries of the lsof command under Linux really mean?
    2.) How can I make damn sure that every socket, stream, file handle, etc. is closed when the worker thread dies?
    Answers will be thanked a lot.
    Tom

    Thanks for the quick replies.
    @BIJ001:
    "ls -l /proc/<PID>/fd" gives the same information as lsof does, namely a slowly but steadily growing number of pipes.
    "fuser" doesn't output anything at all.
    "Do you make exec calls? Are you really sure stdout and stderr are consumed/closed?" Well, the external program is called by
    Process p = Runtime.getRuntime().exec(commandLine);
    and the stdout and stderr are consumed by two classes that subclass Thread (named showOutput) that do nothing but prepend the corresponding outputs with "OUT:" and "ERR:" and put them into a log.
    Are they closed? I hope so: I call showOutput's halt method, which should eventually close the handles.
    @sjasja:
    "Sounds like a pipe." Thought so, too ;)
    "Do you have the waitFor() in there?" Mentioning the waitFor(): my code looks more like:
    try {
         p = Runtime.getRuntime().exec(...);
         outShow = new showOutput(p.getInputStream(), "OUT");
         outShow.start();
         errShow = new showOutput(p.getErrorStream(), "ERR");
         errShow.start();
         p.waitFor();
    } catch (InterruptedException e) {
         // can't wait for process? better go to sleep some.
         log.info("Can't wait for process! Going to sleep 10sec.");
         try { Thread.sleep(10000); } catch (InterruptedException ignoreMe) {}
    } finally {
         if (outShow != null) outShow.halt();
         if (errShow != null) errShow.halt();
    }
    /** Within the class showOutput: */
    /** This method gets called by showOutput's halt: */
    public void notifyOfHalt() {
         log.debug("Registered a notification to halt");
         try {
              myReader.close(); // is initialized to read from the given InputStream
         } catch (IOException ignoreMe) {}
    }
    It seems as if both of you are quite sure that the pipes are actually created by the exec command and not closed afterwards.
    Would you deem it unlikely that most of the handles are opened somewhere else and the exec command is just the final one that crashes the prog?
    That's what I thought.
    Thanks for your time
    Tom
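    For anyone hitting the same thing: the FIFO entries in lsof are the three pipes (stdin, stdout, stderr) that every Runtime.exec() creates; if any of their ends is never closed, they accumulate until the descriptor limit is hit. A minimal sketch of a leak-free exec wrapper (the class and method names are made up for the example, and it assumes you never need to write to the child's stdin):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;

    public class ExecWithoutLeaks {
        // Runs a command, drains stdout/stderr concurrently, and closes
        // all three pipe ends so no FIFO descriptors are left behind.
        public static int run(String[] commandLine) throws IOException, InterruptedException {
            Process p = Runtime.getRuntime().exec(commandLine);
            try {
                p.getOutputStream().close(); // we never feed stdin, so close it at once
                Thread out = drain(p.getInputStream(), "OUT");
                Thread err = drain(p.getErrorStream(), "ERR");
                int exit = p.waitFor();
                out.join();
                err.join();
                return exit;
            } finally {
                p.destroy(); // releases any handles still held for the child
            }
        }

        private static Thread drain(final InputStream in, final String tag) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    BufferedReader r = new BufferedReader(new InputStreamReader(in));
                    try {
                        for (String line; (line = r.readLine()) != null; ) {
                            System.out.println(tag + ": " + line);
                        }
                    } catch (IOException ignored) {
                    } finally {
                        try { r.close(); } catch (IOException e) {}
                    }
                }
            });
            t.start();
            return t;
        }
    }

    The key differences from the code above: stdin is closed explicitly, the drainer threads close their readers in a finally block (not only when halt() happens to be called), and the parent joins them so nothing is left half-read when the worker thread dies.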
