Use of setScalableModeOn() in DataProcessor results in a NullPointerException

Hi,
we are working with the BI Publisher API version 5.6.2. Because we have giant data templates, we get an OutOfMemory exception in the DataProcessor every time we run the report. Therefore I call setScalableModeOn() before the processData() method. Now I get a NullPointerException when processData() starts. The whole message is:
Exception in thread "main" java.lang.NullPointerException
at oracle.apps.xdo.dataengine.ScalableStringList.cleanup(Unknown Source)
at oracle.apps.xdo.dataengine.XMLPGEN.processXML(Unknown Source)
at oracle.apps.xdo.dataengine.XMLPGEN.writeXML(Unknown Source)
at oracle.apps.xdo.dataengine.DataProcessor.processData(Unknown Source)
There is no detailed documentation about this method, so it would be very nice if someone could help me solve this problem.

Could someone please help me find this problem? Any hint would help. I already turned tracing on, but that didn't give me any more information. I checked privileges on the file system and in the database but cannot find any problem, so if anyone has an idea, please write me.
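For reference, here is roughly the call sequence in question as a minimal, self-contained sketch (my reconstruction, not the poster's code: the JDBC URL, template path, and output path are placeholders). Since scalable mode spills intermediate results to temporary files, one thing worth checking — an assumption on my part, given the ScalableStringList.cleanup frame in the trace — is that the JVM's temp directory exists and is writable:

import java.sql.Connection;
import java.sql.DriverManager;
import oracle.apps.xdo.dataengine.DataProcessor;

public class ScalableModeTest {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- replace with your own.
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@host:1521:orcl", "user", "password");

        DataProcessor dp = new DataProcessor();
        dp.setDataTemplate("myTemplate.xdt"); // placeholder data template
        dp.setConnection(conn);
        dp.setOutput("/tmp/output.xml");      // placeholder output file

        // Scalable mode writes intermediate data to temp files instead of
        // holding it all in memory; it must be set before processData().
        dp.setScalableModeOn();
        dp.processData();

        conn.close();
    }
}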

Similar Messages

  • When using Quite Imposing Plus the resulting file is only given a Temp name which will be sent as the document name.

    Our company has recently upgraded from Acrobat Professional 8 to Acrobat Professional XI. We also use the Quite Imposing Plus plug-in in both versions. Now, when using Quite Imposing Plus, the resulting file is only given a temp name, which is sent to the printer as the document name unless the imposed document is saved first. This was not the case in Acrobat Professional 8. I have attached a screenshot that shows an example.
    Is there any reason why this is happening, and how can we stop it?

    Thank you for your advice although I think you may be incorrect as the naming of the documents made with Quite Imposing is correct in Acrobat 8.
    After reading the comment below it would seem something has changed from version nine onwards.
    Going forward we plan on changing the way we save and send our imposed documents to print but thank you for your time.

  • When "Use Google Instant", "Open search results in a new browser window." does not function.

    In the "Google Search Setting", if select "Do not use Google Instant", then the selected search result of Google Search will be displayed in a new window, but if "Use Google Instant", the selected result will replace Google Search window. This problem does not exist when using Chrome Browser.

    That does what it says - even if Google tells the browser to open the page in a new window, you have told Firefox to ignore that and open it in a new tab. Uncheck the box and you should get the result you want.

  • How do I use FILE_GET_NAME and make my resulting dataset name unique?

    Okay, here's a case where I have a bunch of pieces to the puzzle -- a little knowledge here, a little knowledge there -- but I'm having trouble putting them together.
    I am working on an RFC that is called by XI as part of an interface.  This interface will execute every 15 minutes.  As part of the RFC's execution (which is very simple and straight-forward) I would like to write out a dataset of the processing results.  I have already learned how to use the OPEN DATASET, TRANSFER, and CLOSE DATASET commands, so I'm good to go there.
    Here's what I'd like to do:  Because this can run every 15 minutes, I don't want to keep overwriting my dataset file with the latest version.  I'd like to keep the dataset name unique so it doesn't happen.  Obviously, the first thought that comes to mind is adding a date/time stamp to the file name, but I'm not sure how -- or the best way -- to do this.
    Also, I was told I should put the file -- for now -- into the DIR_DATA directory.  I had no idea what this meant until I was told about t-code "FILE" and that this was the logical file name.  Someone in-house thought I'd need to use a function called FILE_GET_NAME to make things easier.
    Okay, so I need to use FILE_GET_NAME apparently, somehow plugging in DIR_DATA as the directory I need, and I want the resulting file name to have the date/time added at run time.  I'm thinking when it comes to batch processing and writing out datasets, this has to be something that someone's already "paved the road" doing.  Has anyone done this?  Do you have a little slice of code doing just this that you could post?  This would go a long way toward helping me understand how this "fits" together, and I would greatly appreciate any help you can provide.
    As always, points awarded to all helpful answers.  Thank you so much!

    hey,
    here is a brief description of logical & physical paths.
    In the physical path, we give the total path of the file, i.e. where the file is actually located on the server.
    For example: /INT/D01/IN/MYFILE.
    This is the physical path in my client for a particular file.
    Sometimes this causes problems: D01 in the path above is the development system, and if we move to quality it will be Q01, etc.
    To make every file independent of the server location, we use the logical path concept: instead of giving the total physical path as above, we give a logical path & file name. Before that, we create a logical path in SAP & assign a physical path to it.
    The function module below is used to get the actual physical path from the logical path name & file name:
    *&---------------------------------------------------------------------*
    *&      Form  GET_PHYSICAL_PATH
    *&---------------------------------------------------------------------*
    *       This form gets the physical file path from the logical path
    *       and the file name.
    *----------------------------------------------------------------------*
    FORM GET_PHYSICAL_PATH.
      DATA : LV_FILE(132) TYPE C,
             V_LENGTH     TYPE I,
             LV_LOGNAME   LIKE FILEPATH-PATHINTERN.

    *--P_LPATH is a parameter on the selection screen and P_FNAME is the
    *--actual file name, declared as below:
    *--PARAMETERS : P_LPATH TYPE RLGRAP-FILENAME,
    *--             P_FNAME TYPE RLGRAP-FILENAME.
      LV_LOGNAME = P_LPATH.

      CALL FUNCTION 'FILE_GET_NAME_USING_PATH'
           EXPORTING
                CLIENT                     = SY-MANDT
                LOGICAL_PATH               = LV_LOGNAME
                OPERATING_SYSTEM           = SY-OPSYS
                FILE_NAME                  = P_FNAME
           IMPORTING
                FILE_NAME_WITH_PATH        = LV_FILE
           EXCEPTIONS
                PATH_NOT_FOUND             = 1
                MISSING_PARAMETER          = 2
                OPERATING_SYSTEM_NOT_FOUND = 3
                FILE_SYSTEM_NOT_FOUND      = 4
                OTHERS                     = 5.
      IF SY-SUBRC <> 0.
        MESSAGE E000 WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ELSE.
    *--The total physical (absolute) path is now in LV_FILE.
        V_FILEPATH = LV_FILE.
      ENDIF.
    ENDFORM.                    " GET_PHYSICAL_PATH
    Unique naming for your files: after getting the physical path from the above function module, append a date & time stamp to the file name, as below.
    CONCATENATE V_FILEPATH
                SY-DATUM
                SY-UZEIT
                INTO V_FILEPATH.
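    To put this together with the OPEN DATASET / TRANSFER / CLOSE DATASET commands from the original question, a short sketch (IT_RESULTS and WA_RESULT are placeholder names for your results table and work area):

    * Write the processing results to the uniquely named server file.
    OPEN DATASET V_FILEPATH FOR OUTPUT IN TEXT MODE.
    LOOP AT IT_RESULTS INTO WA_RESULT.
      TRANSFER WA_RESULT TO V_FILEPATH.
    ENDLOOP.
    CLOSE DATASET V_FILEPATH.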
    This way your file name will always be unique.
    regards
    srikanth

  • Using SUBMIT and getting the results back

    Hi,
    I need to call a BAPI online (not in the background) using a different user ID than the one that's logged in. I read in other threads about using SUBMIT ... VIA JOB.
    CALL FUNCTION 'JOB_OPEN'...
    SUBMIT zsubmitted_program
           VIA JOB     l_jobname
               NUMBER  l_jobcount
               USER    i_user       ====> changed User ID
           TO SAP-SPOOL WITHOUT SPOOL DYNPRO
               SPOOL PARAMETERS ls_params
                  AND RETURN.
    CALL FUNCTION 'JOB_CLOSE'...
    Questions:
    1. How would I know that the job is finished?
    2. How do I retrieve the messages returned by the BAPI so I can present it on the screen?
    Many thanks,
    Huntr

    Hi,
    I will be wrapping BAPI_MATERIAL_SAVEDATA in a Z program and running it with the method mentioned above. With that method, I need to know when the batch job is done and how to retrieve the results.
    Is there a function module to retrieve the job status or log?
    Also, I think that if I write the BAPI results to the spool, I should be able to retrieve the spool. What is the function module to do so?
    Regards,
    Huntr
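    A hedged sketch of the two lookups being asked about (function module and parameter names quoted from memory; please verify them in SE37 before relying on this):

    * Read the job header to check the status ('F' = finished, 'A' = aborted).
    * BTC_READ_JOBHEAD is a constant from the standard include LBTCDEF.
    DATA: LS_JOBHEAD TYPE TBTCJOB.

    CALL FUNCTION 'BP_JOB_READ'
      EXPORTING
        JOB_READ_JOBNAME  = L_JOBNAME
        JOB_READ_JOBCOUNT = L_JOBCOUNT
        JOB_READ_OPCODE   = BTC_READ_JOBHEAD
      IMPORTING
        JOB_READ_JOBHEAD  = LS_JOBHEAD.

    * Once the job is finished, read its spool output back as text lines.
    * L_SPOOLID is the spool request number; the line type of LT_SPOOL is
    * an assumption.
    DATA: LT_SPOOL TYPE TABLE OF CHAR255.

    CALL FUNCTION 'RSPO_RETURN_ABAP_SPOOLJOB'
      EXPORTING
        RQIDENT = L_SPOOLID
      TABLES
        BUFFER  = LT_SPOOL.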

  • How to send email using pl/sql containing the result set as the msg body

    Hi, I'm using PL/SQL to send emails to users from a result set held in a database table. I have used UTL_SMTP calls to establish the connection with the SMTP mail server. I'm stuck on the logic where I have to include the message body, which is actually the result set of a table. For instance:
    IF (p_message = 0) THEN
    UTL_SMTP.write_data(l_mail_conn, 'There is no mismatch between the codes' || UTL_TCP.crlf);
    ELSE
    UTL_SMTP.write_data(l_mail_conn, 'The missing codes are ' || UTL_TCP.crlf);
    for s in (select * from temp)
    loop
    UTL_SMTP.write_data(l_mail_conn, ' ' ||s || UTL_TCP.crlf);
    end loop;
    END IF;
    UTL_SMTP.close_data(l_mail_conn);
    UTL_SMTP.quit(l_mail_conn);
    END;
    *** p_message is a parameter passed to this procedure.
    Can I obtain the result in the same form it has in my table, which has three columns? I want to display the three columns as they are, with the records.
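    One way to keep the three-column layout in a plain-text mail body is to pad each column to a fixed width (a sketch: col1/col2/col3 are placeholders for the real column names in TEMP):

    -- Emit a header line, then one fixed-width line per record.
    UTL_SMTP.write_data(l_mail_conn,
        RPAD('Column1', 20) || RPAD('Column2', 20) || 'Column3' || UTL_TCP.crlf);
    FOR s IN (SELECT col1, col2, col3 FROM temp) LOOP
      UTL_SMTP.write_data(l_mail_conn,
          RPAD(s.col1, 20) || RPAD(s.col2, 20) || s.col3 || UTL_TCP.crlf);
    END LOOP;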

    This is not really related to this forum, but you can use the code below.
    CREATE OR REPLACE PROCEDURE SEND_MAIL (subject   varchar2,
                                           mail_from varchar2,
                                           mail_to   varchar2,
                                           mail_msg  varchar2)
    IS
      mail_host varchar2(30) := 'XXXXX';
      mail_conn utl_smtp.connection;
      tz_offset number := 0;
      str       varchar2(32000);
    BEGIN
      begin
        select to_number(replace(dbtimezone, ':00')) / 24 into tz_offset from dual;
      exception
        when others then
          null;
      end;
      mail_conn := utl_smtp.open_connection(mail_host, 25);
      utl_smtp.helo(mail_conn, mail_host);
      utl_smtp.mail(mail_conn, '[email protected]');
      utl_smtp.rcpt(mail_conn, mail_to);
      utl_smtp.open_data(mail_conn);
      utl_smtp.write_data(mail_conn, 'Date: ' || to_char(sysdate - tz_offset, 'dd mon yy hh24:mi:ss') || utl_tcp.crlf);
      utl_smtp.write_data(mail_conn, 'From: ' || mail_from || utl_tcp.crlf);
      utl_smtp.write_data(mail_conn, 'To: "' || mail_to || '" <' || mail_to || '>' || utl_tcp.crlf);
      utl_smtp.write_data(mail_conn, 'Subject: ' || subject || utl_tcp.crlf);
      utl_smtp.write_data(mail_conn, utl_tcp.crlf);
      utl_smtp.write_data(mail_conn, replace_turkish_chars(mail_msg) || utl_tcp.crlf);
      utl_smtp.write_data(mail_conn, utl_tcp.crlf);
      utl_smtp.close_data(mail_conn);
      utl_smtp.quit(mail_conn);
    END;

  • Using variables in the SQL Results in Dashboard Prompt

    I use the following query to limit the results of my dashboard prompt. The variable value is supplied by a dashboard prompt on the same page.
    SELECT Owner.Name saw_0 FROM "iSupport Service Request" WHERE "- Service Request Attributes".Platform =
    '@{promptedPlatform}' ORDER BY saw_0
    It works fine with this query, but only after selecting a value in the prompt that feeds the variable. Is there any way to have this prompt, by default (when the page first loads), show all possible values prior to filtering by the variable?
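    One common approach (an assumption on my part, not from this thread, using the standard @{variable}{default} syntax for presentation variables) is to give the variable a match-all default and filter with LIKE:

    SELECT Owner.Name saw_0 FROM "iSupport Service Request"
    WHERE "- Service Request Attributes".Platform LIKE '@{promptedPlatform}{%}'
    ORDER BY saw_0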

    Hi Harley ,
    The frame labels are 1, 2, 3, etc.
    sym.setVariable("subselect", 0); // in the Stage compositionReady event
    Then in the menu you can set the "subselect" variable to 1, 2, 3 and call sym.play(sym.getVariable("subselect"));
    Thanks and Regards,
    Sudeshna Sarkar

  • Error using Workflow API to retrieve result of closed instance.

    Hello,
    I have been attempting to use the workflow Java API to retrieve the payloads of open and completed instances of a particular BPEL process. I have been able to use the IInstanceHandle.getField method to retrieve the payload of an active/open instance, but I am having difficulty with the IInstanceHandle.getResult method to retrieve the result of a closed/completed instance. The error I receive is: 'Scope not found. The scope "BpPrc0.1" has not been defined in the current instance.' I have checked the audit history for this instance and have seen this particular scope scattered throughout the XML. Any ideas on what may be causing this issue or something I should be looking for?
    Thanks in advance.

    Hi,
    Use the getField() method:
    Object field = handle.getField("outputVariable");
    This returns a HashMap with the payload as one of its entries.
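    In code, that might look like the following (a sketch: the HashMap entry name "payload" is an assumption based on the description above):

    // getField() reportedly returns a HashMap of message parts.
    java.util.Map<?, ?> parts =
            (java.util.Map<?, ?>) handle.getField("outputVariable");
    Object payload = parts.get("payload");  // assumed entry name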

  • How to use stored procedure which returns result set in OBIEE

    Hi,
    I have one stored procedure (with one parameter) which returns a result set. Can we use this stored procedure in OBIEE? If so, how do we use it?
    I know we have the EVALUATE function, but I'm not sure whether I can use it for my SP, which returns a result set. Is there any other way to use my SP?
    Please help me solve this.
    Thanks

    Hi Radha,
    If you want to cache the results in the Oracle BI Server, you should check that option. When you run a query the Oracle BI Server will get its results from the cache, based on the persistence time you define. If the cache is expired, the Oracle BI Server will go to the database to get the results.
    If you want to use caching, you should enable caching in the nqsconfig.ini file.
    Cheers,
    Daan Bakboord

  • When using Google to search the results are displayed so small I cannot read them. This just started today.

    Today when I was searching with Google, the display of the results started getting smaller and smaller right before my eyes, and I could not stop it; I was doing nothing but reading the screen. I cannot find anything that tells me how to get the larger display back. All the other search engines, such as Bing and Yahoo, are displaying correctly. I have always used Google, I like Google, and I do not want to change. Please can you help me? I have a screen print available if you need to see it. I tried searching the Google website for help but couldn't find anything. I installed something called Stylish, thinking that would help, but it did not.

    Hello, try pressing Ctrl+0 ('zero') while you're on the page to reset the zoom level. In case this doesn't work, also go through the other common troubleshooting steps at [[Websites look wrong or appear differently than they should]].

  • Shouldn't using WITH return the same results as if you'd put the results in a table first?

    First off, here's my version info:
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for HPUX: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    I just reread the documentation on the subquery factoring clause of the SELECT statement, and I didn't see any restrictions that would apply.
    Can someone help me understand why I'm getting different results?  I'd like to be able to use the statement that creates MAT3, but for some reason it doesn't work.  However, when I break it up and store the last TMP subquery in a table (MAT1), I'm able to get the expected results in MAT2.
    Sorry if the example seems a little esoteric.  I was trying to put something together to help illustrate another problem, so it was convenient to use the same statements to illustrate this problem.
    drop table mat1;
    create table mat1 as
    with skus as (
      select level as sku_id
      from dual
      connect by level <= 1000
      ),
      tran_dates as (
      select to_date('20130731', 'yyyymmdd') + level as tran_date
      from dual
      connect by level <= 31
      ),
      sku_dates as (
      select s.sku_id,
      t.tran_date,
      case when dbms_random.value * 5 < 4
      then 0
      else 1
      end as has_changes,
      round(dbms_random.value * 10000, 2) as unit_cost
      from skus s
      inner join tran_dates t
      on 1 = 1
      )
    select d.sku_id,
      d.tran_date,
      d.unit_cost
      from sku_dates d
      where d.has_changes = 1;
    drop table mat2;
    create table mat2 as
    select m.sku_id,
      m.tran_date,
      m.unit_cost,
      min(m.tran_date) over (partition by m.sku_id order by m.tran_date rows between 1 following and 1 following) as next_tran_date
      from mat1 m;
    drop table mat3;
    create table mat3 as
    with skus as (
      select level as sku_id
      from dual
      connect by level <= 1000
      ),
      tran_dates as (
      select to_date('20130731', 'yyyymmdd') + level as tran_date
      from dual
      connect by level <= 31
      ),
      sku_dates as (
      select s.sku_id,
      t.tran_date,
      case when dbms_random.value * 5 < 4
      then 0
      else 1
      end as has_changes,
      round(dbms_random.value * 10000, 2) as unit_cost
      from skus s
      inner join tran_dates t
      on 1 = 1
      ),
      tmp as (
      select d.sku_id,
      d.tran_date,
      d.unit_cost
      from sku_dates d
      where d.has_changes = 1
      )
    select m.sku_id,
      m.tran_date,
      m.unit_cost,
      min(m.tran_date) over (partition by m.sku_id order by m.tran_date rows between 1 following and 1 following) as next_tran_date
      from tmp m;
    select count(*) from mat2;
    select count(*) from mat3;

    select count(*) from mat2;
      COUNT(*)
         31000
    Executed in 0.046 seconds
    select count(*) from mat3;
      COUNT(*)
              0
    Executed in 0.031 seconds

    I think there's something else going on.
    I made the change you suggested, with a slight modification to retain the same functionality of flagging ~80% of the rows as not having changes.  I then copied that section of my script - included below - and pasted it into my session twice.  Unfortunately, I got different results each time.  I have had a number of strange problems when using the WITH clause, which is one of the reasons I jumped at posting something here when I encountered it again in this context.
    Can you help me understand why this would happen?
    drop table mat3;
    create table mat3 as
    with skus as (
      select level as sku_id
      from dual
      connect by level <= 1000
      ),
      tran_dates as (
      select to_date('20130731', 'yyyymmdd') + level as tran_date
      from dual
      connect by level <= 31
      ),
      sku_dates as (
      select s.sku_id,
      t.tran_date,
      case when dbms_random.value(1,100) * 5 < 400
      then 0
      else 1
      end as has_changes,
      round(dbms_random.value * 10000, 2) as unit_cost
      from skus s
      inner join tran_dates t
      on 1 = 1
      ),
      tmp as (
      select d.sku_id,
      d.tran_date,
      d.unit_cost
      from sku_dates d
      where d.has_changes = 1
      )
    select m.sku_id,
      m.tran_date,
      m.unit_cost,
      min(m.tran_date) over (partition by m.sku_id order by m.tran_date rows between 1 following and 1 following) as next_tran_date
      from tmp m;
    select count(*) from mat2;
    select count(*) from mat3;
    152249 < mattk > drop table mat3;
    Table dropped
    Executed in 0.016 seconds
    152249 < mattk > create table mat3 as
                             2  with skus as (
                             3   select level as sku_id
                             4   from dual
                             5   connect by level <= 1000
                             6   ),
                             7   tran_dates as (
                             8   select to_date('20130731', 'yyyymmdd') + level as tran_date
                             9   from dual
                            10   connect by level <= 31
                            11   ),
                            12   sku_dates as (
                            13   select s.sku_id,
                            14   t.tran_date,
                            15   case when dbms_random.value(1,100) * 5 < 400
                            16   then 0
                            17   else 1
                            18   end as has_changes,
                            19   round(dbms_random.value * 10000, 2) as unit_cost
                            20   from skus s
                            21   inner join tran_dates t
                            22   on 1 = 1
                            23   ),
                            24   tmp as (
                            25   select d.sku_id,
                            26   d.tran_date,
                            27   d.unit_cost
                            28   from sku_dates d
                            29   where d.has_changes = 1
                            30   )
                            31  select m.sku_id,
                            32   m.tran_date,
                            33   m.unit_cost,
                            34   min(m.tran_date) over (partition by m.sku_id order by m.tran_date rows between 1 following and 1 following) as next_tran_date
                            35   from tmp m
                            36  ;
    Table created
    Executed in 0.53 seconds
    152250 < mattk > select count(*) from mat2;
      COUNT(*)
             0
    Executed in 0.047 seconds
    152250 < mattk > select count(*) from mat3;
      COUNT(*)
         31000
    Executed in 0.047 seconds
    152250 < mattk >
    152251 < mattk > drop table mat3;
    Table dropped
    Executed in 0.016 seconds
    152252 < mattk > create table mat3 as
                             2  with skus as (
                             3   select level as sku_id
                             4   from dual
                             5   connect by level <= 1000
                             6   ),
                             7   tran_dates as (
                             8   select to_date('20130731', 'yyyymmdd') + level as tran_date
                             9   from dual
                            10   connect by level <= 31
                            11   ),
                            12   sku_dates as (
                            13   select s.sku_id,
                            14   t.tran_date,
                            15   case when dbms_random.value(1,100) * 5 < 400
                            16   then 0
                            17   else 1
                            18   end as has_changes,
                            19   round(dbms_random.value * 10000, 2) as unit_cost
                            20   from skus s
                            21   inner join tran_dates t
                            22   on 1 = 1
                            23   ),
                            24   tmp as (
                            25   select d.sku_id,
                            26   d.tran_date,
                            27   d.unit_cost
                            28   from sku_dates d
                            29   where d.has_changes = 1
                            30   )
                            31  select m.sku_id,
                            32   m.tran_date,
                            33   m.unit_cost,
                            34   min(m.tran_date) over (partition by m.sku_id order by m.tran_date rows between 1 following and 1 following) as next_tran_date
                            35   from tmp m
                            36  ;
    Table created
    Executed in 0.078 seconds
    152252 < mattk > select count(*) from mat2;
      COUNT(*)
             0
    Executed in 0.031 seconds
    152252 < mattk > select count(*) from mat3;
      COUNT(*)
             0
    Executed in 0.047 seconds
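    For what it's worth, a hedged observation (my assumption, not a confirmed diagnosis): dbms_random.value is non-deterministic, and the optimizer is free either to materialize a WITH subquery or to merge it into the main query, so an expression like has_changes is not guaranteed to be evaluated once or consistently between references or between runs. Forcing materialization with the (undocumented but widely used) MATERIALIZE hint makes each subquery evaluate exactly once; a minimal self-contained illustration:

    -- Sketch: materialize the CTE so the random has_changes values are
    -- computed once and then reused by the outer query.
    with r as (
      select /*+ materialize */
             case when dbms_random.value * 5 < 4 then 0 else 1 end as has_changes
      from dual
      connect by level <= 1000
      )
    select count(*) from r where has_changes = 1;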

  • Using SUM BY to limit result of 3 union queries (11g)

    Hey guys,
    First time posting here (btw, I've tried changing my handle name via edit profile here to no success! Ugh!).
    I need to modify a report that's built off of 3 union queries.  Here's a simplified table version of the union result:
    Label  Year  Category  Score  Ct
    A      2005  1a        4.0    1
    A      2005  2b        3.5    1
    A      2005  3c        2.5    1
    B      2006  1a        2.8    1
    B      2006  2b        4.0    1
    Ct is a calculated item where I did a CASE on a field just to get a count of 1 in this column.  I need to limit this result to cases where, per label, per year, there are 3 categories.  So, looking at the above table, the Label A rows will be the only set returned.  To hopefully get 3 under the Ct column where there are 3 categories per year, per label, I tried "SUM(Ct by Year)", but it didn't give the correct result.  I also tried "SUM(SUM(Ct by Year) by Label)", and that didn't work either (not that I expected it to).  I also messed around with GROUP BY, without success.
    Has anyone run into something like this?  Any input/tip is appreciated. THANKS!!
    Twiggy99

    Are you using an outer query to get this?
    By the way, what happens when you use COUNT instead of SUM?
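    If an outer query is an option, here is a plain-SQL sketch of the filter (my assumption: the three union queries are wrapped as an inline view; column names follow the table above):

    -- Count the categories per Label/Year with an analytic function, then
    -- keep only the groups that have exactly 3 of them.
    select label, year, category, score, ct
    from (
      select u.*,
             count(*) over (partition by label, year) as cat_ct
      from ( /* the 3 union queries go here */ ) u
      )
    where cat_ct = 3;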

  • Using the LIKE operator doesn't return results

    Hi
    I'm very new to SQL and databases in general. I have a VB.NET app from which I'm trying to send a SQL query to a SQL Server 2012 server.
    For testing purposes I'm using SQL Server Management Studio. I have one query that works, but when I try the same query with a wildcard in the statement, it returns no results, even though I know for a fact that the table contains such values.
    This works:
    SELECT number
    FROM casenumber
    WHERE number = '100510'
    This, for some reason, does NOT work:
    SELECT number
    FROM casenumber
    WHERE number LIKE '100%'
    What am I doing wrong? Help, pretty please? Many thanks in advance!

    Using WHERE number = '100510' to get your result is not the right way to do it.
    When you ask a T-SQL question, you should provide your table DDL (table columns and data types...).
    Since your number column is a float numeric type, you don't need quotes around the value, and you cannot use a string LIKE search on a number unless you convert the column to a string type.
    The final query may give you the result you want, but the underlying point stands: operating with the appropriate data type matters a lot in T-SQL.
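    A minimal sketch of the conversion described above (the column is a float, per the reply; table and column names are from the post). Converting float directly to varchar can yield scientific notation for large values, so it is cast to a whole-number decimal first:

    SELECT number
    FROM casenumber
    WHERE CAST(CAST(number AS decimal(18, 0)) AS varchar(20)) LIKE '100%';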

  • Using a PDA to store results data from Compact FP?

    Is it possible to use a PDA as a front-end display for a LabVIEW RT application running on a Compact FieldPoint system? I also want to write the results data from the Compact FieldPoint app to the PDA's RAM drive.
    I am just quoting this system, but any details as to what's required to do this would be helpful.
    Thanks in advance for your help!
    B Myers

    Yes, it is possible. Compact FieldPoint RT controllers have Ethernet and serial ports that can be used to communicate with PDAs. Communication through TCP or serial can be done in LabVIEW RT just as it is done in regular LabVIEW. National Instruments currently does not have any PDA-specific tools for this, so you would have to take care of the PDA programming and communication yourself. But to the Compact FieldPoint, it would just be talking to a serial or Ethernet device.
    Regards,
    JR Andrews
    Application Engineer
    National Instruments

  • Monitoring OAQ using mgw_gateway gives an odd result?

    When we submit the following query:
    select  AGENT_STATUS, AGENT_PING, LAST_ERROR_DATE, LAST_ERROR_TIME from mgw_gateway;
    We get this result:
    SQL> select AGENT_STATUS, AGENT_PING, LAST_ERROR_DATE, LAST_ERROR_TIME from mgw_gateway;
    MESSAGING GATEWAY IS NOT ONLINE
    AGENT_STATUS         AGENT_PING           LAST_ERROR_DATE     LAST_ERR
    RUNNING              UNREACHABLE
    SQL> /
    MESSAGING GATEWAY IS NOT ONLINE
    AGENT_STATUS         AGENT_PING           LAST_ERROR_DATE     LAST_ERR
    RUNNING              REACHABLE
    SQL>
    Those two queries were submitted sequentially.
    DB is located on a Redhat Linux server, where we also submit these queries, so there should be no network issues.
    The questions are:
    Why is AGENT_PING showing two different results?
    Why do we get "MESSAGING GATEWAY IS NOT ONLINE" no matter what the status?
    We are up and running, data is being retrieved from MQ into the AQ queue tables, we are processing and not losing any data.
    The "NOT ONLINE" message is not listed in any docs or online sources I have been able to find.
    We are at a loss. How can we diagnose this?

    Hi WoG, thanks for the reply.
    No, not an "own" view, definitely sys.
    Yes, your observation is correct. That "NOT ONLINE" message is separate from the select results and is not a column on the view.
    Oracle version is:
         Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
         With the Partitioning option
    OS:
         Red Hat Enterprise Linux Server release 5.10 (Tikanga)
    View definition as pulled via sqldeveloper:
    CREATE OR REPLACE FORCE VIEW "SYS"."MGW_GATEWAY" ("AGENT_STATUS", "AGENT_PING",
      "AGENT_JOB", "AGENT_USER", "AGENT_DATABASE", "LAST_ERROR_DATE",
      "LAST_ERROR_TIME", "LAST_ERROR_MSG", "MAX_CONNECTIONS", "MAX_MEMORY",
      "MAX_THREADS", "AGENT_INSTANCE", "AGENT_START_TIME", "CONNTYPE", "AGENT_NAME"
      , "SERVICE", "INITFILE", "COMMENTS")
    AS
      SELECT
        -- agent status
        CASE
          WHEN gw.agent_status = 0
          THEN 'NOT_STARTED'
          WHEN gw.agent_status = 1
          THEN 'START_SCHEDULED'
          WHEN gw.agent_status = 2
          THEN 'STARTING'
          WHEN gw.agent_status = 3
          THEN 'INITIALIZING'
          WHEN gw.agent_status = 4
          THEN 'RUNNING'
          WHEN gw.agent_status = 5
          THEN 'SHUTTING_DOWN'
          WHEN gw.agent_status = 6
          THEN 'BROKEN'
        END AGENT_STATUS,
        -- agent ping status; no ping (delay) if agent not yet started
        CASE
          WHEN gw.agent_status >= 0
          AND gw.agent_status  <=1
          THEN ''
          WHEN mgwi_admin.ping(gw.agent_name,3) = 1
          THEN 'REACHABLE'
          ELSE 'UNREACHABLE'
        END AGENT_PING,
        -- queued job used to start gateway agent
        gw.agent_job AGENT_JOB,
        -- agent user and database (connect string)
        gw.agent_user AGENT_USER,
        gw.agent_database AGENT_DATABASE,
        -- info about last gateway agent error
        gw.error_time LAST_ERROR_DATE,
        SUBSTR(TO_CHAR(gw.error_time, 'HH24:MI:SS'), 1, 8) LAST_ERROR_TIME,
        gw.error_message LAST_ERROR_MSG,
        -- misc config information
        gw.max_connections MAX_CONNECTIONS,
        gw.max_memory MAX_MEMORY,
        gw.max_threads MAX_THREADS,
        gw.agent_instance AGENT_INSTANCE,
        gw.agent_start_time AGENT_START_TIME,
        DECODE(bitand(gw.flags, 1), 0, 'JDBC_OCI', 1, 'JDBC_THIN', NULL) CONNTYPE,
        gw.agent_name AGENT_NAME,
        gw.service SERVICE,
        gw.initfile INITFILE,
        gw.comments COMMENTS
      FROM
        mgw$_gateway gw
    WITH READ ONLY;
    COMMENT ON TABLE "SYS"."MGW_GATEWAY"
    IS
      'Messaging Gateway status and configuration information';
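    One observation from the view source above (mine, not from the thread): the AGENT_PING column is computed per query by the branch "WHEN mgwi_admin.ping(gw.agent_name,3) = 1 THEN 'REACHABLE'", i.e. every SELECT against the view issues a live ping with a timeout argument of 3, so two sequential queries can legitimately report different ping results.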
