Forms 6i / iAS: DB sessions not cleaned up

Environment: Tru64 UNIX, iAS 1.0.2.1, DB 8.1.6
Since migrating to Forms 6i / iAS 1.0.2.1, we sometimes see DB sessions that are not cleaned up on the database accessed by a Forms 6i application. Note: it is not the web server that holds these unclosed sessions, but the Oracle database on a remote UNIX server (8.0.5). There I see these sessions with this cursor information:
ALTER SESSION SET NLS_LANGUAGE='AMERICAN' NLS_TERRITORY='AMERICA' NLS_CURRENCY='$' NLS_ISO_CURRENCY='AMERICA' NLS_NUMERIC_CHARACTERS='.,' NLS_CALENDAR='GREGORIAN' NLS_DATE_FORMAT='DD-MON-RR' NLS_DATE_LANGUAGE='AMERICAN' NLS_SORT='BINARY'
Is there any parameter that would help me kill these needless sessions automatically? And what action makes Oracle keep these sessions?
Thanks a lot.
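A few mechanisms are commonly used for this: dead connection detection (SQLNET.EXPIRE_TIME), resource-profile limits (IDLE_TIME, with RESOURCE_LIMIT=TRUE), or killing the sessions explicitly with ALTER SYSTEM KILL SESSION 'sid,serial#'. As a sketch of the last approach, the Python below builds kill statements from rows you would fetch from v$session; the cutoff and the sample rows are illustrative assumptions, not data from this thread:

```python
# Sketch: build ALTER SYSTEM KILL SESSION statements for sessions idle
# longer than a cutoff. The rows stand in for a query such as:
#   SELECT sid, serial#, last_call_et FROM v$session WHERE username = ...
IDLE_CUTOFF_SECONDS = 3600  # illustrative threshold: one hour idle

def kill_statements(sessions, cutoff=IDLE_CUTOFF_SECONDS):
    """sessions: iterable of (sid, serial#, idle_seconds) tuples."""
    return [
        f"ALTER SYSTEM KILL SESSION '{sid},{serial}'"
        for sid, serial, idle in sessions
        if idle > cutoff
    ]

rows = [(101, 2041, 7200), (102, 555, 30)]  # fake v$session rows
for stmt in kill_statements(rows):
    print(stmt)
```

Such a script would typically be run from a scheduled job; whether killing is appropriate depends on why the sessions are orphaned in the first place.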

Please reread your post with great care.
Do you see a platform and operating system?
Do you see a product name and full version number?
Do you see a description of which commands were executed?
Good. Because I don't either.
Post a full and complete description of the environment so we can try to replicate it.

Similar Messages

• PMON not cleaning user sessions

    PMON is not cleaning up user sessions when clients' terminals suddenly power off. Approximately 200 clients connect to the database from another location via a point-to-point bridge. When the power goes off on that side, all their concurrent sessions are left hanging on the server side (10g Release 2); PMON does not clean up the hung sessions, and new users are unable to connect because the listener refuses to establish new connections.
    How can I solve this problem?
    Thanks in Advance

    Fri Feb 15 10:13:05 2008
    db_recovery_file_dest_size of 15360 MB is 0.00% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Fri Feb 15 10:13:05 2008
    Completed: alter database open
    Fri Feb 15 10:18:56 2008
    Shutting down archive processes
    Fri Feb 15 10:19:01 2008
    ARCH shutting down
    ARC3: Archival stopped
    Fri Feb 15 10:25:01 2008
    Process startup failed, error stack:
    Fri Feb 15 10:25:01 2008
    Errors in file e:\oracle\product\10.2.0\admin\orcl\bdump\orcl_psp0_3576.trc:
    ORA-27300: OS system dependent operation:spcdr:9261:4200 failed with status: 997
    ORA-27301: OS failure message: Overlapped I/O operation is in progress.
    ORA-27302: failure occurred at: skgpspawn
    Fri Feb 15 10:25:02 2008
    Process J000 died, see its trace file
    Fri Feb 15 10:25:02 2008
    kkjcre1p: unable to spawn jobq slave process
    Fri Feb 15 10:25:02 2008
    Errors in file e:\oracle\product\10.2.0\admin\orcl\bdump\orcl_cjq0_3688.trc:
    Fri Feb 15 10:39:47 2008
    Process startup failed, error stack:
    Fri Feb 15 10:39:47 2008
    Errors in file e:\oracle\product\10.2.0\admin\orcl\bdump\orcl_psp0_3576.trc:
    ORA-27300: OS system dependent operation:spcdr:9261:4200 failed with status: 997
    ORA-27301: OS failure message: Overlapped I/O operation is in progress.
    ORA-27302: failure occurred at: skgpspawn
    Fri Feb 15 10:39:48 2008
    Process J000 died, see its trace file
    Fri Feb 15 10:39:48 2008
    kkjcre1p: unable to spawn jobq slave process
    Fri Feb 15 10:39:48 2008
    Errors in file e:\oracle\product\10.2.0\admin\orcl\bdump\orcl_cjq0_3688.trc:
    Fri Feb 15 10:40:13 2008
    Process startup failed, error stack:
    Fri Feb 15 10:40:13 2008
    Errors in file e:\oracle\product\10.2.0\admin\orcl\bdump\orcl_psp0_3576.trc:
    ORA-27300: OS system dependent operation:spcdr:9261:4200 failed with status: 997
    ORA-27301: OS failure message: Overlapped I/O operation is in progress.
    ORA-27302: failure occurred at: skgpspawn
    Fri Feb 15 10:40:14 2008
    Process J000 died, see its trace file
    Fri Feb 15 10:40:14 2008
    kkjcre1p: unable to spawn jobq slave process
    Fri Feb 15 10:40:14 2008
    Errors in file e:\oracle\product\10.2.0\admin\orcl\bdump\orcl_cjq0_3688.trc:
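    For what it's worth, PMON only cleans up a session once the server notices the client is gone; for clients that vanish at power-off, the usual remedy is server-side dead connection detection via SQLNET.EXPIRE_TIME in sqlnet.ora. A minimal fragment (the 10-minute interval is an illustrative choice, not a value from this thread):

```
# server-side sqlnet.ora
# Probe each client connection every 10 minutes; connections whose
# client no longer responds are flagged dead so PMON can clean them up.
SQLNET.EXPIRE_TIME = 10
```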

  • Firefox will not open my previous session

    Firefox 4 will not open any previous session once I shut down. I can have, say, 5 tabs open, close Firefox, and reopen it, and the session is restored. But if I have those same 5 tabs open and shut down, then turn on my computer the next day: I click on Firefox, it highlights the icon, the highlight disappears after about 5 seconds, I have to click it again, and it opens a totally new session. How do I fix this?

    Make sure that you do not use [[Clear Recent History]] to clear the "Browsing History" if Firefox is closed.

  • Error in running a query in XSJS - column store error: [2950] user is not authorized :  at ptime/session/dist/RemoteQueryExecution.cc:1354]

    Hi All,
    I get the below error when I load my xsjs file in the browser:
    Error while executing query: [dberror(PreparedStatement.executeQuery): 2048 - column store error: column store error: [2950] user is not authorized : at ptime/session/dist/RemoteQueryExecution.cc:1354]
    I am able to execute the same query in the HANA SQL editor.
    Please note that no analytical privileges are applied to the view.
    Could you please help solve this issue?
    Regards,
    Logesh Kumar.

    Hey,
    are you using the same database user for both the SQL editor and XSJS?
    Try the following:
    before executing the query, display it, copy it from the browser, and execute it in the SQL editor.
    Also put the statement in a try/catch block and analyse the exception.
    Sreehari

  • Is not defined in this session

    SELECT cycle_flag, sequence_name FROM ALL_SEQUENCES
    I picked 'record_SEQ' sequence from the listing.
    select record_SEQ.Currval from dual; ==> ORA-08002: sequence record_SEQ.Currval is not defined in this session.
    Where is the problem?

    As per the link you posted, it says this error comes up when there is no value... but I can see the last value in the sequence is 20632 (using TOAD's sequence tab and highlighting the particular sequence).
    So this error is confusing.
    Total Questions: 3 (3 unresolved)????? One was answered and marked as answered.
    Edited by: user575089 on Dec 23, 2009 12:47 AM
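    For context, the behaviour is by design: CURRVAL is defined only after the same session has called NEXTVAL at least once, and the last value TOAD shows comes from the data dictionary, not from your session. A small Python sketch of that per-session rule (the Sequence class is purely illustrative, not an Oracle API):

```python
class Sequence:
    """Illustrative model of Oracle's per-session CURRVAL rule."""
    def __init__(self, start=1):
        self._next = start       # dictionary-level state (LAST_NUMBER)
        self._currval = None     # per-session state, undefined at first

    def nextval(self):
        value = self._next
        self._next += 1
        self._currval = value
        return value

    def currval(self):
        if self._currval is None:
            # mirrors ORA-08002: CURRVAL before any NEXTVAL in the session
            raise RuntimeError("ORA-08002: CURRVAL is not yet defined in this session")
        return self._currval

seq = Sequence(start=20633)
try:
    seq.currval()            # fails: no NEXTVAL in this "session" yet
except RuntimeError as e:
    print(e)
print(seq.nextval())         # 20633
print(seq.currval())         # now defined: 20633
```

So in the original example, running select record_SEQ.nextval from dual once in the session makes currval work afterwards.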

  • Failover cluster not cleanly shutting down service

    I've got a two node 2008 R2 failover cluster.  I have a single service being managed by it that I configured just as a generic service.  The failover works perfectly when the service is stopped, or when one of the machines goes down, and the immediate
    failback I have configured works perfectly in both scenarios as well.
    However, there's an issue when I take the networking down on the preferred owner of the service.  As far as I can tell (this is the first time I've tried failover clustering, so I'm learning), when I take the networking down, the cluster service shuts
    down, and in turn shuts down the service I've told it to manage.  At this point, when the services aren't running, the service fails over to the secondary as intended.  The problem shows up when I turn the networking back on.  The service tries
    and fails to start on the primary (as many times as I've configured it to try), and then eventually gives up and goes back to the secondary.
    The reason for this, examining logs for the service, is that the required port is already in use.  I checked some more, and sure enough, when I take the networking offline the service gets shut down, but the executable is still running.  This is
    repeatable every time.  When I just stop the service, though, the executables go away.  So it's something to do specifically with how the managed service gets shut down *when it's shut down due to the cluster service stopping*.  For some reason
    it's not cleaning up that associated executable.
    Any ideas as to why this is happening and how to fix/work around it would be extremely welcome.  Thank you!

    Try to generate the cluster log using cluster log /g /copy:<path to a local folder>. You might need to bump up log verbosity using cluster /prop ClusterLogLevel=5 (you can check the current level using cluster /prop).
    You also can look at the SCM diagnostic channel in the event viewer. Start eventvwr. Wait for the clock icon on the Application and Services Logs to go away. Once the clock icon is gone select this entry and in the menu check Show Analytic and Debug Logs.
    Now expand to the SCM provider located at
    Application and Services Logs\Microsoft\Service Control Manager Performance Diagnostic Provider\Diagnostic.
    or Microsoft-Windows-Services/Diagnostic
    Enable the log, run repro, disable the log. After that you should see events from the SCM showing you your service state transitions.
    The terminate parameters do not seem to be configurable. I can think of two ways of fixing the issue:
    - Writing your own cluster resource DLL where you can implement your own policies. This would be a place to start: http://blogs.msdn.com/b/clustering/archive/2010/08/24/10053405.aspx.
    - Assuming you cannot change the source code of the service to kill orphaned child processes on startup, you have to clean up by some other means. Create another service and make your service dependent on this new service. The new service must be much faster in responding to the SCM commands. On start of this service, enumerate all processes running on the machine using PSAPI and kill the orphaned child processes. You should probably be able to achieve something similar using a GenScript resource plus a VB script that does the cleanup.
    Regards, Vladimir Petter, Microsoft Corporation
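    The cleanup idea in the second option can be sketched without PSAPI: from a snapshot of (pid, ppid, name), pick instances of the service executable whose parent process has exited, and kill them. Everything below (the process names and the snapshot itself) is a hypothetical illustration of the startup-cleanup logic, not Windows API code:

```python
def orphaned(processes, exe_name):
    """processes: list of (pid, ppid, name) tuples.
    Return pids of exe_name instances whose parent has exited,
    i.e. whose ppid is not present in the snapshot."""
    alive = {pid for pid, _, _ in processes}
    return [pid for pid, ppid, name in processes
            if name == exe_name and ppid not in alive]

snapshot = [
    (400, 1,   "services.exe"),
    (500, 999, "myservice.exe"),   # parent 999 has exited -> orphan
    (600, 400, "myservice.exe"),   # parent still alive -> keep
]
for pid in orphaned(snapshot, "myservice.exe"):
    print(f"taskkill /PID {pid} /F")   # what the startup cleanup would run
```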

  • JMF error - Format of Stream not supported in RTP Session Manager

    java.io.IOException: Format of Stream not supported in RTP Session Manager
    at com.sun.media.datasink.rtp.Handler.open(Handler.java:139)
    Why does this error occur? I already created the DataSink.
    When I try to do this...
    dsk.open(); // the error is raised here
    dsk.start();
    Code of the media server follows. I want to send audio (WAV) like a radio station, streaming continuously from a file, without stopping (PullBuffered).
    Class Servidor, which offers media streaming:
    public class Servidor {
    private MediaLocator ml;
    private Processor pro;
    private javax.media.protocol.DataSource ds;
    private DataSink dsk;
    private boolean codificado = false;
    //start the server service, passing the adress of media
    // ex: d:\music\music.wav
    // pass the ip and port, to make a server works
    public void iniciarServicoServidor(String end,String ip, int porta)
    try {
    //capture media
    capturarMidia(end);
    //creates processor
    criarProcessor();
    // configure the processor
    configurarProcessor();
    //setContent RAW
    descreverConteudoEnviado();
    //format the media in right RTP format
    formatRTP();
    //creat the streaming
    criarStreaming();
    //configure the server
    configurarServidor(ip, porta);
    // the exception is raised in this method
    iniciarServidor();
    // when I call DataSink.open() it raises the exception:
    // java.io.IOException: Format of Stream not supported in RTP Session Manager
    //   at com.sun.media.datasink.rtp.Handler.open(Handler.java:139)
    } catch (RuntimeException e) {
    System.out.println("Houve um erro em iniciarServicoServidor");
    e.printStackTrace();
    public void capturarMidia(String endereco)
    try {
    System.out.println("**************************************************************");
    System.out.println("Iniciando processo de servidor de multimidia em " + Calendar.getInstance().getTime().toString());
    ml = new MediaLocator("file:///" + endereco);
    System.out.println("Midia realizada com sucesso.");
    System.out.println ("[" + "file:///" + endereco +"]");
    } catch (RuntimeException e) {
    System.out.println("Houve um erro em capturarMidia");
    e.printStackTrace ();
    public void criarProcessor()
    try {
    System.out.println("**************************************************************");
    pro = Manager.createProcessor(ml);
    System.out.println("Processor criado com sucesso.");
    System.out.println("Midia com durcao:" + pro.getDuration().getSeconds());
    } catch (NoProcessorException e) {
    System.out.println("Houve um erro em criarProcessor");
    e.printStackTrace();
    } catch (IOException e) {
    System.out.println ("Houve um erro em criarProcessor");
    e.printStackTrace();
    public void configurarProcessor()
    try {
    System.out.println("**************************************************************");
    System.out.println("Processor em estado de configuração.");
    pro.configure();
    System.out.println("Processor configurado.");
    } catch (RuntimeException e) {
    System.out.println("Houve um erro em configurarProcessor");
    e.printStackTrace();
    public void descreverConteudoEnviado()
    try {
    System.out.println("**************************************************************");
    pro.setContentDescriptor(new ContentDescriptor(ContentDescriptor.RAW));
    System.out.println("Descritor de conteudo:" + pro.getContentDescriptor().toString());
    } catch (NotConfiguredError e) {
    System.out.println("Houve um erro em descreverConteudoEnviado");
    e.printStackTrace();
    private Format checkForVideoSizes(Format original, Format supported) {
    int width, height;
    Dimension size = ((VideoFormat)original).getSize();
    Format jpegFmt = new Format(VideoFormat.JPEG_RTP);
    Format h263Fmt = new Format(VideoFormat.H263_RTP);
    if (supported.matches(jpegFmt)) {
    // For JPEG, make sure width and height are divisible by 8.
    width = (size.width % 8 == 0 ? size.width :
    (int)(size.width / 8) * 8);
    height = (size.height % 8 == 0 ? size.height :
    (int)(size.height / 8) * 8);
    } else if (supported.matches(h263Fmt)) {
    // For H.263, we only support some specific sizes.
    if (size.width < 128) {
    width = 128;
    height = 96;
    } else if ( size.width < 176) {
    width = 176;
    height = 144;
    } else {
    width = 352;
    height = 288;
    } else {
    // We don't know this particular format. We'll just
    // leave it alone then.
    return supported;
    return (new VideoFormat(null,
    new Dimension(width, height),
    Format.NOT_SPECIFIED ,
    null,
    Format.NOT_SPECIFIED)).intersects(supported);
    public void formatRTP()
    try {
    // Program the tracks.
    TrackControl tracks[] = pro.getTrackControls();
    Format supported[];
    Format chosen;
    for (int i = 0; i < tracks.length; i++) {
    Format format = tracks[i].getFormat();
    if (tracks[i].isEnabled()) {
    supported = tracks[i].getSupportedFormats();
    // We've set the output content to the RAW_RTP.
    // So all the supported formats should work with RTP.
    // We'll just pick the first one.
    if (supported.length > 0) {
    if (supported[0] instanceof VideoFormat) {
    // For video formats, we should double check the
    // sizes since not all formats work in all sizes.
    chosen = checkForVideoSizes(tracks[i].getFormat(),
    supported[0]);
    } else
    chosen = supported[0];
    tracks[i].setFormat(chosen);
    System.err.println("Track " + i + " is set to transmit as:");
    System.err.println(" " + chosen);
    codificado = true;
    } else
    tracks[i].setEnabled(false);
    } else
    tracks[i].setEnabled(false);
    } catch (RuntimeException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
    public void tocar()
    pro.start();
    public void criarStreaming()
    try {
    System.out.println("**************************************************************");
    if (codificado)
    System.out.println("Midia codificada...");
    System.out.println("Processor entra em estado de realize.");
    pro.realize();
    System.out.println("Processor realized.");
    System.out.println("Adquirindo o streaming a ser enviado.");
    ds = pro.getDataOutput();
    System.out.println("Streaming adquirido pronto a ser enviado.");
    } catch (NotRealizedError e) {
    System.out.println("Houve um erro em criarStreaming");
    System.out.println(e.getMessage());
    e.printStackTrace();
    catch (Exception e) {
    System.out.println(e.getMessage());
    public void configurarServidor(String ip, int porta)
    System.out.println("**************************************************************");
    String url = "rtp://" + ip + ":" + porta + "/audio/1";
    System.out.println("Servidor ira atender em " + url);
    MediaLocator mml = new MediaLocator(url);
    System.out.println("Localizador de midia ja criado");
    try {
    System.out.println("Criando um DataSink a ser enviado.");
    dsk = Manager.createDataSink(ds, mml);
    System.out.println("DataSink criado.");
    } catch (NoDataSinkException e) {
    e.printStackTrace();
    public void iniciarServidor()
    try {
    System.out.println("**************************************************************");
    dsk.open();
    System.out.println("Servidor ligado.");
    dsk.start();
    System.out.println("Servidor iniciado.");
    } catch (SecurityException e) {
    e.printStackTrace();
    } catch (IOException e) {
    e.printStackTrace();
    That produces the console output below.
    All methods execute, but the last one does not work:
    the method that opens the DataSink.
    What can I do?
    Iniciando processo de servidor de multimidia em Sun May 13 22:37:02 BRT 2007
    Midia realizada com sucesso.
    [file:///c:\radio.wav ]
    Processor criado com sucesso.
    Midia com durcao:9.223372036854776E9
    Processor em estado de configuração.
    Processor configurado.
    Descritor de conteudo:RAW
    Midia codificada...
    Processor entra em estado de realize.
    Processor realized.
    Adquirindo o streaming a ser enviado.
    Streaming adquirido pronto a ser enviado.
    Servidor ira atender em rtp://127.0.0.1:22000/audio/1
    Localizador de midia ja criado
    Criando um DataSink a ser enviado.
    streams is [Lcom.sun.media.multiplexer.RawBufferMux$RawBufferSourceStream;@a0dcd9 : 1
    sink: setOutputLocator rtp://127.0.0.1:22000/audio/1
    DataSink criado.
    Track 0 is set to transmit as:
    unknown, 44100.0 Hz, 16-bit, Stereo, LittleEndian, Signed, 176400.0 frame rate, FrameSize=32 bits
    java.io.IOException: Format of Stream not supported in RTP Session Manager
    at com.sun.media.datasink.rtp.Handler.open(Handler.java:139)
    at br.org.multimidiasi.motor.Servidor.iniciarServidor(Servidor.java:291)
    at br.org.multimidiasi.motor.Servidor.iniciarServicoServidor(Servidor.java:43)
    at br.org.multimidiasi.motor.ConsoleServidor.main(ConsoleServidor.java:30)
    Thanks in advance.
    The error is raised exactly in this method.
    I've tried other formats (AVI, MP3), but all give the same error. What can I do?
    public void iniciarServidor()
    try {
    System.out.println("**************************************************************");
    dsk.open();
    System.out.println("Servidor ligado.");
    dsk.start();
    System.out.println("Servidor iniciado.");
    } catch (SecurityException e) {
    e.printStackTrace();
    } catch (IOException e) {
    e.printStackTrace();
    Track 0 is set to transmit as:
    unknown, 44100.0 Hz, 16-bit, Stereo, LittleEndian, Signed, 176400.0 frame rate, FrameSize=32 bits
    java.io.IOException: Format of Stream not supported in RTP Session Manager
    at com.sun.media.datasink.rtp.Handler.open(Handler.java:139)
    at br.org.multimidiasi.motor.Servidor.iniciarServidor(Servidor.java:291)
    at br.org.multimidiasi.motor.Servidor.iniciarServicoServidor(Servidor.java:43)
    at br.org.multimidiasi.motor.ConsoleServidor.main(ConsoleServidor.java:30)

    unknown, 44100.0 Hz, 16-bit, Stereo,
    LittleEndian, Signed, 176400.0 frame rate,
    FrameSize=32 bits
    java.io.IOException: Format of Stream not supported
    in RTP Session Manager
    The fact that it doesn't know what the format is
    might have to do with the problem. I've had similar
    problems, and I've traced them back to missing jars and
    codecs. Have you tried running the same code locally,
    without the transmission, to see if your player will
    even play the file?

    Already tried, and it works: I used a Player to play it and it plays normally. I also tried different audio and video codecs, but with no success.

  • Processes in v$process which do not exist in v$session(help)!!

    Hi, all.
    The database is 2 node RAC database (10.2.0.2.0)
    on 32 bit windows 2003 EE SP1.
    Our database is suffering "CKPT hang" from time to time.
    I checked v$process and v$session on both node by the following sql.
    select addr,pid,spid,username, program
    from v$process
    where addr not in (select paddr from v$session)
    ADDR     PID     SPID     USERNAME     PROGRAM
    56E2     1               PSEUDO
    56E2     18     3984     SYSTEM     ORACLE.EXE (D000)
    56E2     19     4020     SYSTEM     ORACLE.EXE (S000)
    56E2     27     3176     SYSTEM     ORACLE.EXE (PZ99)
    56E3     39     2296     SYSTEM     ORACLE.EXE (PZ97)
    ●select * from v$px_process
    SERVER_NAME     STATUS     PID     SPID     SID     SERIAL#
    PZ97     AVAILABLE     39     2296          
    PZ99     AVAILABLE     27     3176          
    ●select * from V$PX_SESSION
    --> no rows
    ●select slave_name,status from v$pq_slave
    SLAVE_NAME     STATUS
    PZ97     IDLE
    PZ99     IDLE
    I found the above processes which do not exist in v$session.
    Is this normal??
    Thanks and Regards.
    Message was edited by:
    user507290

    Hi,
    >>I can see incomplete checkpoint message in alert log files.
    This message indicates that Oracle wants to reuse a redo log file, but the current checkpoint position is still in that log. In this case, Oracle must wait until the checkpoint position passes that log. In fact, the "checkpoint not complete" messages are generated because the logs are switching so fast that the checkpoint associated with the log switch isn't terminated. Oracle stops processing until the checkpoint completes successfully.
    In this case, you normally need to increase the size of the redo logs or add more redo log groups. Take a look at http://www.dbazine.com/oracle/or-articles/foot2
    Now, as you said the current redo log size is 2 GB: how frequently do redo log switches occur in the database? See the SQL below:
    Log Switching Distribution
    select to_char(first_time,'DD/MM/YYYY') day,
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
    to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
    sum(1) "TOTAL_IN_DAY"
    from v$log_history
    group by to_char(first_time,'DD/MM/YYYY')
    order by to_date(day);
    Average Switch Time
    select to_char(first_time,'DD/MM/YYYY HH24')||':00:00' "Time",
    count(*) "Nbswitch",
    trunc(60/count(*),1) "Average switch time(minutes)",
    count(*)*rsize "EstimateSize"
    from v$log_history, (select sum(bytes/1024/1024)/count(*) rsize from v$log)
    group by to_char(first_time,'DD/MM/YYYY HH24'),rsize
    order by to_date(to_char(first_time,'DD/MM/YYYY HH24'), 'DD/MM/YYYY HH24') desc;
    In addition, if you have access to Oracle Metalink, see Note 147468.1, the Checkpoint Tuning and Troubleshooting Guide.
    Cheers
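    The first query simply buckets v$log_history.first_time by day and hour; the same aggregation is easy to sanity-check offline. A Python sketch over illustrative timestamps (stand-ins for FIRST_TIME values; the data is made up):

```python
from collections import Counter
from datetime import datetime

def switches_per_hour(first_times):
    """Count log switches per (day, hour) bucket, like the SQL above
    groups v$log_history.first_time by DD/MM/YYYY and HH24."""
    return Counter(t.strftime("%d/%m/%Y %H:00") for t in first_times)

# Illustrative data: four switches in one hour would suggest the redo
# logs are small relative to the redo generation rate.
times = [datetime(2008, 2, 15, 10, m) for m in (1, 12, 25, 48)]
for bucket, n in sorted(switches_per_hour(times).items()):
    print(bucket, n)
```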

  • Processes in v$process that do not exist in v$session (2 node RAC)!

    Hi, all.
    The database is 2 node RAC database (10.2.0.2.0)
    on 32 bit windows 2003 EE SP1.
    Our database is suffering "CKPT hang" from time to time.
    I checked v$process and v$session on both node by the following sql.
    select addr,pid,spid,username, program
    from v$process
    where addr not in (select paddr from v$session)
    ADDR PID SPID USERNAME PROGRAM
    56E2 1 PSEUDO
    56E2 18 3984 SYSTEM ORACLE.EXE (D000)
    56E2 19 4020 SYSTEM ORACLE.EXE (S000)
    56E2 27 3176 SYSTEM ORACLE.EXE (PZ99)
    56E3 39 2296 SYSTEM ORACLE.EXE (PZ97)
    ●select * from v$px_process
    SERVER_NAME STATUS PID SPID SID SERIAL#
    PZ97 AVAILABLE 39 2296
    PZ99 AVAILABLE 27 3176
    ●select * from V$PX_SESSION
    --> no rows
    ●select slave_name,status from v$pq_slave
    SLAVE_NAME STATUS
    PZ97 IDLE
    PZ99 IDLE
    I found the above processes which do not exist in v$session.
    Is this normal??
    Thanks and Regards.

    You see nothing in v$session because there is nothing to see...
    The parallel servers are AVAILABLE, i.e. no sessions are running parallel executions (as shown in V$PX_SESSION).
    PID is not SID.

  • Configuration Manager Error: "Could not clean working directory"

    I am currently attempting to install Adobe LiveCycle Reader Extensions ES on a Windows Server 2003 box with 2 GB of RAM.
    I am getting an error during the Configuration Manager phase of the installation. Screen capture available here: http://destructoray.com/albums/album01/Error.jpg
    The text of the error message is:
    Error [ACM-LCM-000-000]
    Failed on 'Executing merge scripts for ..\export\adobe-livecycle-native-jboss-x86_win32.ear'
    Could not clean working directory
    Here is the dump from lcm.0.log:
    [2007-07-23 11:15:29,344], FINE, Thread-3, com.adobe.livecycle.lcm.feature.configureLC.MergeEars2, Cleaning working directory
    [2007-07-23 11:15:29,374], FINE, Thread-3, com.adobe.livecycle.lcm.feature.configureLC.MergeEars2, Deleting contents [323 files] from: ..\working\mergeTmp
    [2007-07-23 11:15:29,789], SEVERE, Thread-3, com.adobe.livecycle.lcm.feature.configureLC.MergeEars2, Could not clean working directory
    com.adobe.livecycle.lcm.core.LCMException[ALC-LCM-000-000]: Could not clean working directory
    at com.adobe.livecycle.lcm.feature.configureLC.MergeEars2.cleanDirectory(MergeEars2.java:442)
    at com.adobe.livecycle.lcm.feature.configureLC.MergeEars2.init(MergeEars2.java:304)
    at com.adobe.livecycle.lcm.feature.configureLC.MergeEars2.mergeEars(MergeEars2.java:217)
    at com.adobe.livecycle.lcm.feature.configureLC.MergeEars.mergeEars(MergeEars.java:227)
    at com.adobe.livecycle.lcm.feature.configureLC.MergeEars.mergeNativeEars(MergeEars.java:162)
    at com.adobe.livecycle.lcm.feature.configureLC.MergeEars.mergeAllEars(MergeEars.java:99)
    at com.adobe.livecycle.lcm.feature.expressTurnkey.ExpressTurnkeyTask$ActualTask.mergeEars(ExpressTurnkeyTask.java:206)
    at com.adobe.livecycle.lcm.feature.expressTurnkey.ExpressTurnkeyTask$ActualTask.<init>(ExpressTurnkeyTask.java:129)
    at com.adobe.livecycle.lcm.feature.expressTurnkey.ExpressTurnkeyTask$1.construct(ExpressTurnkeyTask.java:103)
    at com.adobe.livecycle.lcm.core.tasks.SwingWorker$2.run(SwingWorker.java:114)
    at java.lang.Thread.run(Thread.java:619)
    Caused by: java.io.IOException: Unable to delete file: ..\working\mergeTmp\base\xbean.jar
    at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:1087)
    at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:811)
    at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:777)
    at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:1079)
    at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:811)
    at com.adobe.livecycle.lcm.feature.configureLC.MergeEars2.cleanDirectory(MergeEars2.java:434)
    ... 10 more
    [2007-07-23 11:15:29,789], SEVERE, Thread-3, com.adobe.livecycle.lcm.feature.configureLC.MergeEars, Could not clean working directory
    com.adobe.livecycle.lcm.core.LCMException[ALC-LCM-000-000]: Could not clean working directory
    (same stack trace as above, truncated in the original)

    The above error can occur if you have insufficient rights.
    I faced the same error on Win 2k8 while running the installer under the Local System Admin account.
    It was resolved by the following steps:
    Solution:
    Go to the AdobeLivecycleES3 root folder > ConfigurationManager > bin.
    Right-click configurationManager (batch file) and click "Run as Administrator".

  • Something about "filesystems is NOT Clean"

    I bought a ThinkPad X1 recently and installed Arch on it. Everything seems fine, except that "filesystems is NOT Clean" appears every time I start the laptop. I cannot find useful information via Google. My guess is that it is a problem associated with an abnormal shutdown procedure.
    Any suggestions are welcome!
    I use lvm2 to manage logical volumes.
    $ sudo lvs
    LV VG Attr LSize Origin Snap% Move Log Copy% Convert
    home archgroup -wc-ao 255.12g
    root archgroup -wc-ao 19.53g
    swap archgroup -wc-ao 3.91g
    var archgroup -wc-ao 19.53g
    Here, "home" and "root" use the "ext4" filesystem and "var" uses "reiserfs".
    The following are several lines from kernel.log which may be useful:
    Aug 26 14:03:27 Igor-Home kernel: [ 100.249270] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
    Aug 26 14:03:38 Igor-Home kernel: [ 111.193303] wlan0: no IPv6 routers present
    Aug 26 14:04:26 Igor-Home kernel: [ 158.867839] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 1
    Aug 26 14:04:30 Igor-Home kernel: [ 162.530151] iwlagn 0000:03:00.0: iwlagn_tx_agg_start on ra = 84:c9:b2:49:55:9d tid = 0
    Aug 26 14:04:48 Igor-Home kernel: [ 180.416263] iwlagn 0000:03:00.0: iwlagn_tx_agg_start on ra = 84:c9:b2:49:55:9d tid = 0
    Aug 26 14:05:30 Igor-Home kernel: [ 222.693282] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 7
    Aug 26 14:05:39 Igor-Home kernel: [ 231.802666] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 10
    Aug 26 14:05:44 Igor-Home kernel: [ 236.781757] iwlagn 0000:03:00.0: iwlagn_tx_agg_start on ra = 84:c9:b2:49:55:9d tid = 0
    Aug 26 14:06:10 Igor-Home kernel: [ 262.689224] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 4
    Aug 26 14:07:04 Igor-Home kernel: [ 316.406812] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 1
    Aug 26 14:07:56 Igor-Home kernel: [ 367.799895] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 14:08:41 Igor-Home kernel: [ 413.245620] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 14:10:59 Igor-Home kernel: [ 550.827063] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 14:16:37 Igor-Home kernel: [ 887.984073] iwlagn 0000:03:00.0: iwlagn_tx_agg_start on ra = 84:c9:b2:49:55:9d tid = 0
    Aug 26 14:19:30 Igor-Home kernel: [ 1060.306074] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 14:22:48 Igor-Home kernel: [ 1257.936197] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 14:25:18 Igor-Home kernel: [ 1407.916947] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 14:31:48 Igor-Home kernel: [ 1796.713849] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 14:34:45 Igor-Home kernel: [ 1973.279503] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 14:35:53 Igor-Home kernel: [ 2041.481879] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 14:38:02 Igor-Home kernel: [ 2169.723218] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 1
    Aug 26 14:45:54 Igor-Home kernel: [ 2641.575445] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 14:51:18 Igor-Home kernel: [ 2964.068080] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 2
    Aug 26 15:05:37 Igor-Home kernel: [ 3820.925654] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 1
    Aug 26 15:12:50 Igor-Home kernel: [ 4253.483857] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 15:15:57 Igor-Home kernel: [ 4440.025743] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 15:18:28 Igor-Home kernel: [ 4591.024983] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 15:23:11 Igor-Home kernel: [ 4873.182426] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 2
    Aug 26 15:29:05 Igor-Home kernel: [ 5225.756546] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 15:31:49 Igor-Home kernel: [ 5390.096634] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 1
    Aug 26 15:36:06 Igor-Home kernel: [ 5646.766321] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 2
    Aug 26 15:37:17 Igor-Home kernel: [ 5717.409054] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 15:41:28 Igor-Home kernel: [ 5967.637480] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 15:44:11 Igor-Home kernel: [ 6130.459235] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Aug 26 15:47:00 Igor-Home kernel: [ 6298.522120] iwlagn 0000:03:00.0: Aggregation not enabled for tid 0 because load = 0
    Last edited by igor1982 (2011-08-26 14:38:19)

    I just guess that it is a problem associated with an abnormal shutdown procedure.
    Well... even if that guess is correct, there are other reasons you may see that message.
    Most Linux filesystems have a maximum mount count and/or a maximum date interval configured inside them. When you exceed those maximums, they'll start complaining, trying to tell us humans that we should run fsck on the filesystem.
    If you want to display the maximums on that ext4 filesystem, then run tune2fs -l /dev/sdXY with root authority, where "X" and "Y" match the filesystem in question. Look for Maximum mount count: and Check interval:. You can use that same tune2fs command to change those values, but read the man page first.
    Since I've never run a reiser filesystem, I don't know how to display its values, but if you look at the "-c" and "-m" parameters in the reiserfstune man page, it will show you how to change those maximums.
    Now... since most filesystems don't like to be fsck'd while mounted (at least in read/write mode), I'd highly suggest that you become familiar with "The sixth field" portion of the fstab man page. If you set your fstab file correctly, it will "Do The Right Thing" when it comes time to scan your filesystems during a boot.
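    To make the fstab point concrete, here is a sketch of what the sixth (fsck pass) field might look like for the LVM layout above. The device-mapper paths are assumptions derived from the volume group name, not taken from the poster's actual fstab: pass 1 for the root filesystem, pass 2 for other local filesystems, and 0 to skip checking (e.g. swap).

    ```
    # <device>                   <mount>  <type>     <options>  <dump>  <pass>
    /dev/mapper/archgroup-root   /        ext4       defaults   0       1
    /dev/mapper/archgroup-home   /home    ext4       defaults   0       2
    /dev/mapper/archgroup-var    /var     reiserfs   defaults   0       2
    /dev/mapper/archgroup-swap   none     swap       defaults   0       0
    ```

    With pass numbers set, the boot scripts know which filesystems to check and in what order when a maximum is exceeded.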
    (Edited to add:)
    Oh... forgot to mention... those kernel log entries are from your Intel wireless adapter's device driver, not your filesystems.
    Last edited by pigiron (2011-08-26 17:43:54)

  • The "Logout" button does not appear my Sakai sessions. I have to mouse over the blank area to find the link to logout. The Logout button does appear in Safari. Firefox version 10.0. Mac OS 10.7.2

    The "Logout" button does not appear my Sakai sessions. I have to mouse over the blank area to find the link to logout. The Logout button does appear in Safari. Firefox version 10.0. Mac OS 10.7.2

    To make sure that all required media is in the library containing the project, you need to "Consolidate media"; see: http://help.apple.com/imovie/mac/10.0/#mov882dee351
    You can then copy the library to your laptop with the Finder and open it with the same version of iMovie. The last phrase is important, since Apple has changed the project format several times, even between minor updates of iMovie 10. Earlier versions may not be able to read the library, and later versions (not possible in this case) may cause the project to be updated so that it is no longer readable by the desktop version. iMovie on the laptop will not at first find the copied library - you will either have to double-click on it or use File - Open Library - Other and navigate to its location.
    Geoff.

  • Could not clean working directory during the Livecycle ES 8.2.1 installation

    Hi,
    While using Adobe LiveCycle Configuration Manager, I am getting the following error. How can I get around this problem?
    Failed on    'Executing merge scripts for adobe-livecycle-native-weblogic-x86_win32.ear'
    Could not clean working directory
    reg,
    Raj

    I'm posting my experiences with this error because it can hopefully help other people.  I was getting the same error during the Configure LiveCycle ES (1 of 3) screen after clicking the Configure button.  It would fail around 75% with a prompt saying error code ALC-LCM-000-000 and could not clean working directory.  The progress log would indicate the failure at executing merge scripts for adobe-livecycle-native-weblogic-x86_win32.ear.  I was installing LC ES 8.2.1 SP3 + QF 3.19 with the Forms and Output components on a Windows 2003 platform.
    Rename working dir (LC support suggestion):
    Renamed c:\adobe\livecycle8.2\configurationManager\working to something like working.old and re-ran ConfigurationManager (CM). This didn't help, since I saw the same error message.
    Clicked Configure again and it went to 100% without any problems. The rest of the install went just fine.
    Reinstall Adobe LC ES 8.2:
    I was getting the same type of error on another box, but clicking Configure wasn't getting past 75% again.
    A reboot of the box didn't help on the next attempt.
    Reinstalling LC ES 8.2 worked:
    Used Add/Remove Programs to remove Adobe LC ES 8.2.
    Reinstalled the software with SP3 + QF 3.19.
    Ran CM again and I was able to get past this error.

  • Export dumps failing with ORA-31623: a job is not attached to this session

    Hi,
    Oracle Version : 11.1.0.7.0
    OS Solaris : 10
    Export partition dumps are failing with the error ORA-31623: a job is not attached to this session via the specified handle.
    When I checked the dba_datapump_jobs view, I found several jobs in "NOT RUNNING" status. Please advise what to do about these.
    OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
    OAM BIN$wzWztSbKbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSbZbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztScGbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSaxbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSbAbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSbPbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSbobFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSb3bFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSanbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSb8bFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSa2bFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSbtbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSbFbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSbybFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztScLbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSasbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSbUbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztScBbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSa7bFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSbebFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    OAM BIN$wzWztSbjbFjgRAAUT9ZhNg==$0 EXPORT TABLE NOT RUNNING 0 0 0
    21 rows selected
    Regards,
    Deeban

    Hi,
    I read on some site that this is how to stop or kill the Data Pump jobs, but I want to know whether anyone has tried this and whether it is recommended or not.
    We can then stop and kill the job:
    SET serveroutput on
    SET lines 100
    DECLARE
      h1 NUMBER;
    BEGIN
      -- Format: DBMS_DATAPUMP.ATTACH('[job_name]','[owner_name]');
      h1 := DBMS_DATAPUMP.ATTACH('SYS_IMPORT_SCHEMA_01','SCHEMA_USER');
      -- immediate = 1, keep_master = 0: stop now and drop the master table
      DBMS_DATAPUMP.STOP_JOB(h1, 1, 0);
    END;
    /
    Regards,
    Deeban
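    The BIN$... job names in the listing above suggest the Data Pump master tables were dropped and are sitting in the recycle bin. A hedged cleanup sketch follows (the DROP TABLE name at the end is a placeholder, not one of the jobs above; verify each entry in dba_datapump_jobs before purging or dropping anything):

    ```sql
    -- Identify stuck jobs with no sessions attached
    SELECT owner_name, job_name, state
    FROM   dba_datapump_jobs
    WHERE  state = 'NOT RUNNING'
    AND    attached_sessions = 0;

    -- Master tables named BIN$... are already in the recycle bin;
    -- purging the owning schema's recycle bin removes the job entries.
    -- (Connect as that user, OAM in this case.)
    PURGE RECYCLEBIN;

    -- A NOT RUNNING job whose master table still has its real name can be
    -- cleaned up by dropping that table, e.g.:
    -- DROP TABLE oam.SYS_EXPORT_TABLE_01;
    ```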

  • Failed to lazily initialize a collection -, could not initialize proxy - no Session

    I have an application that i am extending to provide a REST API.  Everything works fine in the main site, but I am getting the following in the exception log when I try to hit the REST API:
        "Error","ajp-bio-8014-exec-3","12/02/14","12:54:06","table","failed to lazily initialize a collection of role: field, could not initialize proxy - no Session The specific sequence of files included or processed is: service.cfc'' "
        org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: field, could not initialize proxy - no Session
            at org.hibernate.collection.internal.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:566)
            at org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:186)
            at org.hibernate.collection.internal.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:137)
            at org.hibernate.collection.internal.PersistentBag.size(PersistentBag.java:242)
            at coldfusion.runtime.xml.ListIndexAccessor.getSize(ListIndexAccessor.java:44)
            at coldfusion.runtime.xml.ArrayHandler.serialize(ArrayHandler.java:69)
            at coldfusion.runtime.xml.CFComponentHandler.serialize(CFComponentHandler.java:106)
            at coldfusion.runtime.XMLizerUtils.serializeXML(XMLizerUtils.java:83)
            at coldfusion.rest.provider.CFObjectProvider.writeTo(CFObjectProvider.java:378)
            at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:306)
            at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1479)
            at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1391)
            at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1381)
            at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
            at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538)
            at coldfusion.rest.servlet.CFRestServletContainer.service(CFRestServletContainer.java:141)
            at coldfusion.rest.servlet.CFRestServletContainer.service(CFRestServletContainer.java:86)
            at coldfusion.rest.servlet.CFRestServlet.serviceUsingAlreadyInitializedContainers(CFRestServlet.java:556)
            at coldfusion.rest.servlet.CFRestServlet.invoke(CFRestServlet.java:434)
            at coldfusion.rest.servlet.RestFilter.invoke(RestFilter.java:58)
            at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:94)
            at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
            at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
            at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
            at coldfusion.rest.servlet.CFRestServlet.invoke(CFRestServlet.java:409)
            at coldfusion.rest.servlet.CFRestServlet.service(CFRestServlet.java:400)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
            at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
            at coldfusion.monitor.event.MonitoringServletFilter.doFilter(MonitoringServletFilter.java:42)
            at coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:46)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
            at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:422)
            at org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:198)
            at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
            at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:313)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
    Disabling lazy loading will fix this, but results in unacceptable performance (load times go from 200ms to 22s).  I'm not sure how else to handle this.
    I am new to REST in ColdFusion, and it seems to me that the CFCs are being handled in an unusual way. They do not appear to be initialized (the init method does not seem to run), and now it seems that ORM is not handled the same way either. Am I missing something?
    Here is the excerpt of my code producing this error:
        import model.beans.*;
        component rest="true" restpath="item" {
            remote item function getitem( numeric id restargsource="Path" ) restpath="{id}" httpmethod="GET" {
                var item = entityLoad( "item", { id = id }, true );
                return item;
            }
        }
    And the bean:
        component persistent="true" table="item" output="false" extends="timestampedBean" batchsize="10" cacheuse="read-only" {
            /* properties */
            property name="id" column="id" type="numeric" ormtype="int" fieldtype="id" generator="identity";
            property name="title" column="title" type="string" ormtype="string";
            property name="description" column="description" type="string" ormtype="string";
            property name="status" column="status" type="numeric" ormtype="byte" default="0";
            property name="user" fieldtype="many-to-one" cfc="user" fkcolumn="userid" inversejoincolumn="userid" lazy="true" cacheuse="read-only";
            property name="field" type="array" fieldtype="many-to-many" cfc="field" fkcolumn="id" linktable="items_fields" inversejoincolumn="fieldid" lazy="extra" batchsize="10" cacheuse="read-only";
        }
    I also noticed in the stdout log that Hibernate is logging the query, but then it logs the "No session" error:
        Hibernate:
            select
                item0_.id as id0_0_,
                item0_.dtcreated as dtcreated0_0_,
                item0_.dtmodified as dtmodified0_0_,
                item0_.title as title0_0_,
                item0_.description as descript6_0_0_,
                item0_.status as status0_0_,
                item0_.userid as userid0_0_
            from
                item item0_
            where
                item0_.id=?
        Dec 2, 2014 15:23:00 PM Error [ajp-bio-8014-exec-3] - failed to lazily initialize a collection of role: field, could not initialize proxy - no Session The specific sequence of files included or processed is: service.cfc''
    I should probably also add that this "item" table is part of a many-to-many relationship, so "collection of role: field" is referencing the foreign table.
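    One possible workaround, sketched below under some assumptions (that ColdFusion ORM generates getField()/getId()/getTitle() accessors for these properties, and that the ORM session is still open inside the REST function body): touch the lazy collection while the session is open, and return a plain struct so the serializer never walks an uninitialized proxy after the session closes. This is a sketch, not a tested fix:

    ```cfml
    component rest="true" restpath="item" {
        remote struct function getitem( numeric id restargsource="Path" ) restpath="{id}" httpmethod="GET" {
            var item = entityLoad( "item", { id = id }, true );
            // Accessing the collection here, while the ORM session is open,
            // forces Hibernate to initialize it before serialization.
            var fields = item.getField();
            var fieldCount = arrayLen( fields );
            // Return a detached snapshot instead of the managed entity.
            return {
                "id"     : item.getId(),
                "title"  : item.getTitle(),
                "fields" : fields
            };
        }
    }
    ```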

