Socket data transfer too long

          hi
          When I start up the first WLS instance for clustering, the following exception
          message is shown constantly:
          <MulticastSocket> Error sending blocked message
          java.io.IOException: A message for a socket data transfer is too long.
          at java.net.PlainDatagramSocketImpl.send(Native Method)
          What could cause this exception, and is it a fatal error?
          I'd appreciate any help.
          thanx ...
          yunyee
          

          hi
          I'm running on AIX 4.3.3.
          The JDK version is 1.2.2.
          Viresh Garg <[email protected]> wrote:
          >Could you also post your OS and JDK version.
          >Viresh Garg
          >
          >Yun Yee wrote:
          >
          >> hi Shiva
          >>
          >> i'm running WLS5.1. and i have Commerce Server 3.1 as well.
          >> i have service pack 8.
          >> can the exception be rectified by service packs?
          >>
          >> thanx
          >>
          >> yunyee
          >>
          >> "Shiva" <[email protected]> wrote:
          >> >
          >> >Hi,
          >> >Would help if you could also tell about the WLS version, Service Packs
          >> >and the
          >> >environment you are using.
          >> >
          >> >Shiva.
          >> >
          >> >"Yun Yee" <[email protected]> wrote:
          >> >>
          >> >>hi
          >> >>
          >> >>when i start up the 1st WLS for clustering, the following exception
          >> >>message is shown constantly:
          >> >><MulticastSocket> Error sending blocked message
          >> >>java.io.IOException: A message for a socket data transfer is too long.
          >> >> at java.net.PlainDatagramSocketImpl.send(Native Method)
          >> >>
          >> >>what could cause the exception and is it a fatal error?
          >> >>
          >> >>appreciate if any help can be given.
          >> >>thanx ...
          >> >>
          >> >>yunyee
          >> >>
          >> >
          >
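
          For context on the error itself: this IOException surfaces when a
          multicast datagram is larger than the OS will accept for a UDP send,
          and on AIX that ceiling is governed by the udp_sendspace network
          tunable (raised with something like 'no -o udp_sendspace=65536',
          subject to your admin's review). A minimal sketch of the mechanism,
          with a placeholder group address and port rather than the real
          cluster settings:

          import java.net.DatagramPacket;
          import java.net.InetAddress;
          import java.net.MulticastSocket;

          public class MulticastSizeCheck {
              public static void main(String[] args) throws Exception {
                  MulticastSocket sock = new MulticastSocket();
                  // On AIX the effective ceiling tracks the udp_sendspace tunable.
                  System.out.println("send buffer: " + sock.getSendBufferSize() + " bytes");
                  byte[] payload = new byte[32 * 1024]; // larger than a small udp_sendspace
                  InetAddress group = InetAddress.getByName("237.0.0.1"); // placeholder
                  // Fails with "A message for a socket data transfer is too long"
                  // when the payload exceeds the OS send limit.
                  sock.send(new DatagramPacket(payload, payload.length, group, 7001));
              }
          }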
          

Similar Messages

  • Data record too long to be imported (0 or 5000)

    Dear Experts,
    In LSMW, while displaying the read records, the following error comes up:
    "Data record too long to be imported (0 or >5000)"
    How can this be rectified?
    The system still allows the further steps and uploads the master record.
    Regards
    Karan

    Hi,
       Hope the same question is already answered in the thread: Error in uploading master through LSMW
       Please check and revert if it's not solved.
    Regards,
    AKPT

  • TDMS Data transfer Step : Long runtime for table GEOLOC and GEOOBJR

    Hi Forum,
    The data transfer for tables GEOLOC and GEOOBJR is taking too long (almost 3 days for 1.7 million records). There are no scrambling rules applied to these tables; I am using TDMS for the first time, so I am not using any scrambling rules to start with.
    Also, 30 processes have gone into error. How can I run those erroneous jobs again?
    Any help is greatly appreciated.
    Regards,
    Anup

    Thanks Harmeet,
    I changed the write type for those activities and re-executed them, and they completed successfully.
    Now the data transfer is complete, but I see a difference in the number of records for these two tables (GEOLOC and GEOOBJR).
    Can you please let me know what might be the reason?
    Regards,
    Anup

  • R/3 Extraction taking too long to load data into BW

    HI There,
    I'm trying to extract the SAP standard extractor 0FI_AP_4 into BW, and it's taking endless time.
    Even the extractor checker RSA3 is taking too long to execute. I don't know why it's so slow,
    since there is not much data to justify such a long runtime.
    I enhanced the datasource with three fields from BSEG using user exits.
    Is that the reason why it's taking so long? Does a user exit slow down the extraction process?
    What measures should I take to speed up the process?
    Thanks for your time
    Vandana

    Thanks for all your replies.
    Please go through the steps I've gone through:
    - Installed the Business Content and its in version 3.5
    - Changed the update rules, Transfer rules and migrated the datasource to BI 7
    - Enhanced 0FI_AP_3 to include three fields from the BSEG table
    - Ran RSA3; the new fields are showing, but the loading is quite slow
    - Commented out the enhancement code and ran RSA3; with little difference, data shows up
    - Removed the comments and ran again; it's fine, though it takes a little more time than the previous step, but data shows up
    - Replicated the datasource into BW
    - Created the info package and started the init process (before this deleted the previous stored init process)
    - Data isn't loading; please see the error message below.
    Diagnosis
    The data request was a full update. In this case, the corresponding table in the source system does not contain any data.
    System Response
    Info IDoc received with status 8.
    Procedure
    Check the data basis in the source system.
    - Checked the transformation between datasource 0FI_AP_4 and Infosource ZFI_AP_4
       and I DID NOT find the three fields which I enhanced from the BSEG table in the 0FI_AP_4 datasource.
    - Replicated the datasource 0FI_AP_4 again, but no change.
    Now I don't know what's happening here.
    When I check the datasource 0FI_AP_4 in RSA6, I can see the three new fields from BSEG.
    When I check RSA3, I can see the data getting populated with the three new fields from BSEG.
    When I check the fields in the datasource 0FI_AP_4 in BW, I can see the three new fields. That shows
    that the connection between BW and R/3 is fine, doesn't it?
    Can anyone please suggest how to go forward from here?
    Thanks for your time
    Vandana

  • New Socket takes too long / how to stop a Thread ?

    Hello to everyone,
    I have a problem for which I have been hunting this ol' Sun website all day for a suitable answer, and have found people who have asked this same question and not gotten answers. Finally, with but a shred of hope left, I post...
    My really short version:
    A call to the Socket(InetAddress, int) constructor can take a very long time before it times out. How can I limit the wait?
    My really detailed version:
    I have a GUI for which the user enters comm parameters, then the program does some I/O while the user waits (but there is a Cancel button for the Impatient), then the results are displayed. Here is quick pseudocode (which, by the way, worked great before there were Sockets in this program -- only serial ports):
    Main (GUI) thread:
         --> reset the stop request flag
         --> calls workerThread.start(), then brings up a Cancel dialog with a Cancel button (thus going to sleep)
         --> (awake by dialog closing -- dialog may have been closed by workerThread or by user)
         --> set stop request flag that worker thread checks (in case he's still alive)
         --> call workerThread.interrupt() (in case he's alive but asleep for some reason (???))
         --> call workerThread.join() to wait for worker thread to be dead (nothing to wait for if he's dead already)
         --> display worker thread's result data or "Cancelled..." information, whichever worker thread has set
    Worker thread:
         --> yield (to give main thread a chance to get the Cancel Dialog showing)
         --> do job, checking (every few code lines) that stop request flag is not set
         --> if stop request, stop and set cancelled flag for Main thread to handle
         --> if finish with no stop request, set result data for Main thread to handle
         --> take down Cancel Dialog (does nothing if not still up, takes down and wakes main thread if still up)
     THE PROBLEM: Worker thread's job involves doing IO, and it may need to instantiate a new Socket. If it is attempting to instantiate Socket with bad arguments, it can get stuck... The port int is hardcoded by the program, but the IP address must be typed by the user.
    If the arguments to Socket(InetAddress, int) constructor contain a valid-but-not-in-use IP address, the worker thread will get stuck in the Socket constructor for a LONG time (I observed 1m:38s). There is nothing the Main thread can do to stop this wait?
    EVEN WORSE: If the user clicks the Cancel Button during this time, the dialog goes away soon/immediately, but then the GUI appears to be hanging (single-threaded look) until the long wait is over (after which the "Cancelled..." info is displayed).
    MY QUESTION: Is there nothing the Main thread can do to stop this wait?
    Your answers will be sincerely appreciated. Despite my hopeless attitude (see above), the folks at this forum really have yet to let me down ...
    /Mel

    http://developer.java.sun.com/developer/technicalArticles/Networking/timeouts/
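
    For reference, a minimal sketch of the connect-with-timeout pattern (J2SE 1.4
    and later; the host and port are placeholders): create an unconnected Socket
    and connect with an explicit timeout, instead of using the
    Socket(InetAddress, int) constructor, which blocks for the OS default.

    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class ConnectWithTimeout {
        public static void main(String[] args) throws Exception {
            Socket socket = new Socket(); // unconnected
            try {
                // Fail after 5 seconds instead of the minute-plus OS default.
                socket.connect(new InetSocketAddress("192.168.1.99", 7001), 5000);
            } catch (SocketTimeoutException e) {
                System.out.println("connect timed out: " + e.getMessage());
            } finally {
                socket.close();
            }
        }
    }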

  • Report script taking too long to export data

    Hello guys,
    I have a report script to export data out of a BSO cube. The cube is 200GB in size, but the exported text file is only 10MB. It takes around 40 mins to export this file.
    I have exported data of this size in less than a minute from other DBs, but this one is taking way too long for me.
    I also have a calc script for the same export, but that too takes 20 mins, which is not reasonable for a 10MB export.
    Any idea why a report script could take this long? Is it due to the huge size of the database? Or is there a way to optimize the report script?
    Any help would be appreciated.
    Thanks

    Thanks for the input guys.
    My DATAEXPORT is taking half the time of my report script export. So yeah, it is much faster, but still not reasonable (20 mins for one month of data) compared to other DBs that export very quickly.
    In my calc I am just FIXing on level 0 members for most of the dimensions against the specific period, year and scenario. I have checked the conditions for an optimal report script; I think mine is just fine.
    The outline has about 15 dimensions in it and only two of them are dense. Do you think the reason might be the huge size of the DB along with too many sparse dimensions?
    I appreciate your help on this.
    Thanks
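
    For comparison, a minimal DATAEXPORT sketch along the lines described above,
    with placeholder member names (assumes a level-0 export for one
    scenario/year/period combination):

    /* Sketch with placeholder member names: level-0 export for one
       scenario/year/period slice of the cube. */
    SET DATAEXPORTOPTIONS
    {
        DataExportLevel "LEVEL0";
        DataExportDynCalc OFF;
    };
    FIX ("Actual", "FY13", "Jan")
        DATAEXPORT "File" "," "/tmp/actual_jan_lev0.txt";
    ENDFIX;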

  • I bought a new iMac today. I'm using migration assistant to move all my software, but the time just keeps getting longer. It says connect an Ethernet cable for faster data transfer. I did, but that doesn't seem to help. Any ideas?

    I bought a new iMac today. I'm using migration assistant to move all my software, but the time just keeps getting longer. It says connect an Ethernet cable for faster data transfer. I did, but that doesn't seem to help. Any ideas?

    m1doc,
    Are you migrating from a Mac or an MS Windows machine? Either way, you probably should be in touch with AppleCare; you have 90 days of free AppleCare telephone support. They can usually help on issues like this. If you don't know the phone number, please use http://support.apple.com/kb/HE57 to help find the number in your country.

  • I was backing up my iPhone by changing the location of the library because I don't have enough space. My phone was taking too long to copy the files, so I cancelled it. The data is stored in the desired location, and now I can't delete that backup

    I was backing up my iPhone by changing the location of the library because I don't have enough space. My phone was taking too long to copy the files, so I cancelled it. The data is stored in the desired location, and now I can't delete that backup.
    Also, tell me about the performance of the iPhone 4 with iOS 7.1.1.
    T0X1C

    rabidrabbit wrote:
    Can I back up my iPhone 4S to my ipad 3 (64 gb)?
    no
    rabidrabbit wrote:
    However, now I don't have enough space in iCloud to back up either device. Why not?
    iCloud only gives you 5 GB of storage for free; if you exceed that limit you have to pay for additional storage.

  • Java.sql.SQLException:ORA-01801:date format is too long for internal buffer

    Hi,
    I am getting the following exception when I try to insert data into a table through a stored procedure:
    oracle.apps.fnd.framework.OAException: java.sql.SQLException: ORA-01801: date format is too long for internal buffer
    When I execute this stored procedure from an anonymous block, it executes successfully, but when I use an OracleCallableStatement to execute the procedure I get this error.
    Please let me know how to resolve this error.
    Is this error something to do with the Database Configuration ?
    Thanks & Regards
    Meenal

    I don't know if this will help, but we were getting this error in several of the standard OA framework pages and after much pain and aggravation it was determined that visiting the Sourcing Home Page was changing the timezone. For most pages this just changed the timezone that dates were displayed in, but some had this ORA-01801 error and some others had an ORA-01830 error (date format picture ends before converting entire input string). Unfortunately, if you are not using Sourcing at your site, this probably won't help, but if you are, have a look at patch # 4519817.
    Note that to get the same error, try the following query (I got this error in 9.2.0.5 and 10.1.0.3):
    select to_date('10-Mar-2006', 'DD-Mon-YYYY________________________________________________HH24:MI:SS') from dual;
    It appears that you can't have a date format that is longer than 68 characters.

  • Event Viewer cannot open the event Log or Custom view. Verify that the Event log service is running or query is too long. The instance name passed was not recognized as valid by a WMI data provider(4201).

    "Event Viewer cannot open the event Log or Custom view. Verify that the Event log service is running or query is too long. The instance name passed was not recognized as valid by a WMI data provider(4201)"
    This error keeps cropping up now and again on most of our domain controllers (Windows Server 2008 and 2008 R2). Usually a restart fixes the issue; however, the issue repeats and security logs don't get generated.
    Any advice on how to fix this issue permanently would be greatly appreciated.

    Please see this: https://social.technet.microsoft.com/Forums/windows/en-US/95987ca3-a1b2-4da6-95b7-d825d06cdac7/error-code-4201-the-instance-name-passed-was-not-recognized-as-valid-by-a-wmi-data-provider?forum=w7itprosecurity
    You can also try rebuilding the WMI repository: http://blogs.technet.com/b/askperf/archive/2009/04/13/wmi-rebuilding-the-wmi-repository.aspx
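    Before a full rebuild, it may be worth checking whether the repository is actually corrupt; a minimal sketch for an elevated command prompt on 2008/R2 (standard winmgmt switches):

    rem Sketch: verify the WMI repository, then attempt an in-place repair.
    winmgmt /verifyrepository
    winmgmt /salvagerepository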
    This posting is provided AS IS with no warranties or guarantees , and confers no rights.
    Ahmed MALEK

  • Manage data element UMBSZ in Material Record : Entry too long

    Hello
    I would like to manage alternative units of measure with a numerator of more than 5 digits in the material master record (transactions MM01, MM02).
    For example:
    A material has a base unit of measure = ST (items): view Basic data 1 of the material master record,
    and the same material must be managed in pallets: 1 pallet = 150000 ST.
    So in the additional data of the material master record we must assign 1 pallet (fields UMREN + MEINH) to 150000 ST (fields UMREZ + MEINS),
    but when I enter 150000 in the field UMREZ I get the error message
    Entry too long (enter in the format __.___)
    Message no. 00089
    In fact, the field UMREZ is defined as DEC, Length = 5, Decimal Places = 0.
    So what could be the solution? Is it possible to manage this field with more than 5 digits? Is it possible to do something in transaction CUNI?
    Thank you for your answers
    Best Regards
    Manuel

    Did you find a solution for this?

  • Long field type data transfer

    hello
    I want to copy one table's data into another table that has a LONG column, but it gives
    me an error:
    SQL> insert into temp select * from temp1;
    insert into temp select * from temp1
    ERROR at line 1:
    ORA-00997: illegal use of LONG datatype
    Both tables have a LONG field, but the LONG data transfer gives this error.
    can somebody help me
    thanks

    LONG datatypes cannot be used in INSERT ... SELECT or CTAS.
    If the maximum length of the LONG column is less than or equal to 4000, then you can use
    SQL> insert into temp select col1, col2, dbms_metadata_util.long2varchar(4000, 'TABLE_NAME', 'COLUMN_NAME', rowid) Column_alias from temp1;
    If the maximum length of the LONG column is greater than 4000, then you can use
    SQL> insert into temp select col1, dbms_metadata_util.long2clob(40000, 'TABLE_NAME', 'COLUMN_NAME', rowid) Column_alias from temp1;
    Regards
    PantherHawk
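
    Another option, a sketch assuming the target column can be created as CLOB
    (table and column names follow the example above): Oracle's TO_LOB operator
    converts a LONG column inside exactly this kind of INSERT ... SELECT.

    -- Sketch: the target column must be CLOB; TO_LOB is only valid on a LONG
    -- column inside INSERT ... SELECT (or CTAS).
    CREATE TABLE temp (
      col1  NUMBER,
      col2  CLOB
    );

    INSERT INTO temp
    SELECT col1, TO_LOB(col2)
    FROM   temp1;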

  • I am looking for a way to enter multiple dates into a field without the form becoming too long.

    I am looking for a way to enter multiple dates into a field without the form becoming too long.
    This will be used by an old-school bookkeeper who needs the form to fit on one page.
    Any ideas?

    Hi,
    If you don't need the field to provide a date picker, verify that it's a date, or sort the dates in the table, you can just use a text area field and have your form filler enter the dates comma-separated.  Otherwise you'd have to add multiple fields.  However, you can lessen the space each field takes up vertically by using the "Labels Left" option (in the toolbar).
    Thanks,
    Todd

  • Data Archive Script is taking too long to delete a large table

    Hi All,
    We have data archive scripts that move data for a date range to a different table, so each script has two parts: first copy data from the original table to the archive table, then delete the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, but the predicate itself is the primary key. Please help... More info below.
    CREATE TABLE "APP"."MON_TXNS"
       (    "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
        "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_PAYER" NUMBER(12,0),
        "ID_PAYER_PI" NUMBER(12,0),
        "ID_PAYEE" NUMBER(12,0),
        "ID_PAYEE_PI" NUMBER(12,0),
        "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
        "STR_TEXT" VARCHAR2(60 CHAR),
        "DAT_MERCHANT_TIMESTAMP" DATE,
        "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
        "DAT_EXPIRATION" DATE,
        "DAT_CREATION" DATE,
        "STR_USER_CREATION" VARCHAR2(30 CHAR),
        "DAT_LAST_UPDATE" DATE,
        "STR_USER_LAST_UPDATE" VARCHAR2(30 CHAR),
        "STR_OTP" CHAR(6 BYTE),
        "ID_AUTH_METHOD_PAYER" NUMBER(1,0),
        "AMNT_AMOUNT" NUMBER(23,0) DEFAULT 0,
        "BOL_IS_AUTOCAPTURE" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_USE_CASE" NUMBER(4,0) NOT NULL ENABLE,
        "ID_AUTH_METHOD_PAYEE" NUMBER(2,0),
         CONSTRAINT "CKC_BOL_IS_CANCELLED_MON_TXNS" CHECK (BOL_IS_CANCELLED in ('Y','N')) ENABLE,
         CONSTRAINT "PK_MON_TXNS" PRIMARY KEY ("ID_TXN")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX"  ENABLE,
         CONSTRAINT "FK_MON_TXNS_CURRENCIES" FOREIGN KEY ("ID_CURRENCY")
          REFERENCES "APP"."CURRENCIES" ("ID_CURRENCY") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYER" FOREIGN KEY ("ID_PAYER")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYEE" FOREIGN KEY ("ID_PAYEE")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_REFERENCE_TXNS" FOREIGN KEY ("ID_TXN")
          REFERENCES "APP"."TXNS" ("ID_TXN") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYER" FOREIGN KEY ("ID_PAYER_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYEE" FOREIGN KEY ("ID_PAYEE_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_AUTHMETHOD" FOREIGN KEY ("ID_AUTH_METHOD_PAYER")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE,
         CONSTRAINT "FK_MON_TXNS_USE_CASE_ID" FOREIGN KEY ("ID_USE_CASE")
          REFERENCES "APP"."USE_CASES" ("ID_USE_CASE") ENABLE,
         CONSTRAINT "FK_MON_TXN_AUTH_PAYEE" FOREIGN KEY ("ID_AUTH_METHOD_PAYEE")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE
      CREATE INDEX "APP"."IDX_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
      CREATE INDEX "APP"."IDX_PAYEE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYE_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYR_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_USE_CASE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_USE_CASE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE UNIQUE INDEX "APP"."PK_MON_TXNS" ON "APP"."MON_TXNS" ("ID_TXN")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
    Data is first moved to a table in schema3 (OTW), and then we delete all the rows that are in OTW from the original table. Below is the explain plan for the delete:
    SQL> explain plan for
      2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 2798378986
    | Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | DELETE STATEMENT       |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   1 |  DELETE                | MON_TXNS   |       |       |            |          |
    |*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
    |   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    Please help,
    thanks,
    Banka Ravi

    'Best practice' is just what Oracle is already doing, as you have already been told: DELETE FROM myTable WHERE myDate BETWEEN myStart AND myEnd.
    Your use case is why many orgs elect to use partitioning with that DATE column as the partition key. Then it is VERY FAST and VERY EASY to truncate or drop partitions that contain old data when you no longer need them, as in the sketch below.
    The other solution is to stop waiting so long to delete data, so that you never have to delete large amounts at once. Instead of deleting data once a month, delete it once a week or even every night. Then the number of rows being deleted will be much smaller and, if the stats are kept current, Oracle may decide to use the index.
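
    A minimal sketch of the partitioning idea (hypothetical layout and partition names, column list abbreviated): with the archive date as partition key, purging a month becomes a metadata operation instead of a row-by-row DELETE.

    -- Hypothetical range-partitioned variant of MON_TXNS (columns abbreviated).
    CREATE TABLE mon_txns_part (
      id_txn       NUMBER(12) NOT NULL,
      dat_creation DATE       NOT NULL
      -- ... remaining MON_TXNS columns ...
    )
    PARTITION BY RANGE (dat_creation) (
      PARTITION p2012m01 VALUES LESS THAN (DATE '2012-02-01'),
      PARTITION p2012m02 VALUES LESS THAN (DATE '2012-03-01')
    );

    -- Archiving a month is then a drop (or exchange) instead of a slow DELETE:
    ALTER TABLE mon_txns_part DROP PARTITION p2012m01;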

  • SQL Statement taking too long to get the data

    Hi,
    There are over 2500 records in a table, and retrieving them all using 'SELECT * FROM table' takes too long to get the data, i.e. 4.3 secs.
    Is there any possible way to shorten the process time?
    Thanks

    Hi Patrick,
    Here is the SQL statement and the table description.
    ID     Number
    SN     Varchar2(12)
    FN     Varchar2(30)
    LN     Varchar2(30)
    By     Varchar(255)
    Dt     Date(7)
    Add     Varchar2(50)
    Add1     Varchar2(30)
    Cty     Varchar2(30)
    Stt     Varchar2(2)
    Zip     Varchar2(12)
    Ph     Varchar2(15)
    Email     Varchar2(30)
    ORgId     Number
    Act     Varchar2(3)     
    select A."FN" || '' '' || A."LN" || '' ('' || A."SN" || '')'' "Name",
    A."By", A."Dt",
    A."Add" || ''
    '' || A."Cty" || '', '' || A."Stt" || '' '' || A."Zip" "Location",
    A."Ph", A."Email", A."ORgId", A."ID",
    A."SN" "OSN", A."Act"
    from "TBL_OPTRS" A where A."ID" <> 0 ';
    I'm displaying all rows in a report.
    If I use 'select * from TBL_OPTRS', this also takes 4.3 to 4.6 secs.
    Thanks.
