Writing to BLOB gets slow as table grows

Hi,
We have a table with a BLOB column. When the table is empty, it takes a few seconds to insert 3k rows. As the table grows to 500k rows, inserting the same number of rows takes almost 5 minutes, and it keeps getting slower and slower. At first I thought it was because of indexes and constraints, so I disabled all foreign key constraints (the primary keys and indexes have to stay), but it didn't make much difference. Other tables with indexes and constraints and even more rows, but without a BLOB column, don't show such significant performance degradation. The BLOB is stored out of line in a separate tablespace with an 8k block size and chunk size.
Do you have any idea what may be causing the slowness of the inserts?
Thanks.

If the tablespace is ASSM (segment space management AUTO), there are known issues with LOB performance. If you can, move the table and its LOB to an MSSM tablespace and re-test your inserts.
BTW, what version and patch level are you running?
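A minimal sketch of such a move (the table, column, and tablespace names below are placeholders; the target MSSM tablespace must already exist):

```sql
-- Check how the current and target tablespaces are managed
SELECT tablespace_name, segment_space_management
  FROM dba_tablespaces
 WHERE tablespace_name IN ('LOB_TS', 'LOB_TS_MSSM');

-- Move the table and its out-of-line LOB to the MSSM tablespace,
-- keeping the 8k chunk size
ALTER TABLE my_table MOVE TABLESPACE lob_ts_mssm
  LOB (blob_col) STORE AS (TABLESPACE lob_ts_mssm CHUNK 8192);

-- A table MOVE leaves its indexes UNUSABLE, so rebuild them
ALTER INDEX my_table_pk REBUILD;
```

Note that the MOVE takes a lock on the table and invalidates its indexes, so it should be done in a maintenance window.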

Similar Messages

  • Index Organized Table getting slow over time

    Hello,
    I recently discovered a problem in our production environment:
    We use an IOT to store search results in the form of primary keys (and additional data). This results in an excessive amount of INSERTs and DELETEs (the application is mainly a research tool). After some time, some statements on the IOT, like SELECT COUNT(*) (optimizer using INDEX FAST FULL SCAN), take longer and longer; however, the IOT index is used correctly by the optimizer, and SELECTs that specify WHERE primk... (optimizer using INDEX RANGE SCAN) are executed with best performance.
    It seems that the IOT somehow gets fragmented or unbalanced. My question is: what would be the best approach to this problem?
    Would a daily ALTER TABLE ... MOVE ONLINE; be a good solution, for example? Is this common practice with highly volatile IOTs in 24x7 production environments?
    Are there any better solutions for storing temporary search results?
    Looking forward to your comments.
    -sd

    In addition to what was mentioned above...
    Have you verified the distribution of the IOT?
    For instance with "ANALYZE INDEX .... VALIDATE STRUCTURE" or "DBMS_STATS.GATHER_TABLE_STATS"?
    It's natural that a full table scan (index fast full scan) gets slower as your IOT grows bigger.
    If you have lots of free leaf nodes, moving the index will be beneficial, although the move itself should be avoided at busy times.
    You also need to run a SQL trace and tkprof to measure how much workload your IOT is generating.
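    As an illustration of the check suggested above (index and table names are placeholders; INDEX_STATS only holds the result of the last VALIDATE STRUCTURE run in the current session):

    ```sql
    -- Validate the IOT's underlying index and inspect leaf-row waste
    ANALYZE INDEX my_iot_pk VALIDATE STRUCTURE;

    SELECT lf_rows, del_lf_rows,
           ROUND(del_lf_rows / NULLIF(lf_rows, 0) * 100, 1) AS pct_deleted
      FROM index_stats;

    -- If a large share of leaf rows are deleted entries, rebuilding helps
    ALTER TABLE my_iot MOVE ONLINE;
    ```

    A high pct_deleted would support the fragmentation theory and make a periodic MOVE worthwhile.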

  • PreparedStatement executeQuery() getting slower and slower

    Hi,
    I have a servlet that does the following:
    1.- Construct the where clause for a query with the data the user has sent.
    2.- Get a connection from the pool.
    3.- Load a temporary table with a select that uses the where clause created in the first step.
    4.- Create a prepared statement as simple as "select * from temp_table where rownum < 500"
    5.- While there is still data in the temporary table, loop:
    5.1.- Extract the information from the result set that PreparedStatement executeQuery() returns.
    5.2.- Delete the rows of the temporary table that were read in step 5.1.
    5.3.- Check if there is more data.
    6.- Close the result set, statements, ...
    The first runs of the loop are very fast, on the order of 500 ms. After the servlet has executed the loop a number of times, each run of the loop gets slower and slower, with the loop time growing on every run.
    Does anyone know why the loop takes longer and longer with each run?
    Regards,
    CptnAgua

            // ... initialization of the servlet and reading of the parameters

            // The connection object is created and a connection is obtained
            // from the application server pool
            ConexionBaseDatos bbdd;
            bbdd = new ConexionBaseDatos();
            try {
                bbdd.conecta();
            } catch (Exception e) {
                // ... exception handling code
            }
            String sql;
            String where = " where 1=1";
            // ... the where clause is built depending on the parameters received by the servlet
            String insert;
            // here all the variables are created and most of them initialized
            insert = "insert into temp_table select * from conciliation_v x " + where;
            Statement stmt = null;
            PreparedStatement prepStmt = null;
            ResultSet rs = null;
            SimpleDateFormat formatoFecha = null;
            Date hoy = null;
            FileWriter fstream = null;
            BufferedWriter file = null;
            String rowid_update = "";
            String rowid_select = "";
            formatoFecha = new SimpleDateFormat("yyyyMMddHHmmss");
            hoy = new Date();
            nombreFichero = "";
            // the folder where the servlet is going to store the output is read
            // from server.xml
            nombreFichero =
                    this.getInitParameter("DirectorioEscritura") + nombreFichero;
            File fichero = new File(nombreFichero);
            // if the file already exists, the servlet stops.
            if (fichero.exists()) {
                out.println("El fichero especificado ya existe."); // "The specified file already exists."
                // ... exception handling code
                LogManager.shutdown();
                return;
            }
            sql = "select * from temp_table where rownum < 500";
            fstream = new FileWriter(nombreFichero);
            file = new BufferedWriter(fstream);
            try {
                // here the temp table is filled with the insert statement defined above
                stmt = bbdd.getStmt();
                stmt.executeUpdate(insert);
                // and the query is prepared
                prepStmt = bbdd.getPrepStmt(sql);
                prepStmt.setFetchSize(500);
            } catch (SQLException e) {
                // ... exception handling code
            }
            String linea;
            String field1 = "";
            String field2 = "";
            String field3 = "";
            String field4 = "";
            String field5 = "";
            String field6 = "";
            String field7 = "";
            String field8 = "";
            String field9 = "";
            String field10 = "";
            String field11 = "";
            String field12 = "";
            String field13 = "";
            boolean hayValores = true;
            // the loop starts
            while (hayValores) {
                hayValores = false;
                try {
                    // The prepared statement is executed
                    rs = prepStmt.executeQuery();
                } catch (SQLException e) {
                    // ... exception handling code
                }
                try {
                    while (rs.next()) {
                        hayValores = true;
                        try {
                            field1 = lPad(rs.getString("field1"), 10); //50
                            field2 = lPad(rs.getString("field2"), 18); //18
                            field3 = lPad(rs.getString("field3"), 10); //0
                            field4 = lPad(rs.getString("field4"), 50); //50
                            field5 = lPad(rs.getString("field5"), 20); //10 + number
                            field6 = lPad(rs.getString("field6"), 16); //number
                            field7 = lPad(rs.getString("field7"), 10); //10
                            field8 = lPad(rs.getString("field8"), 10); //10
                            field9 = lPad(rs.getString("field9"), 1); //1
                            field10 = lPad(rs.getString("field10"), 10); //0
                            field11 = lPad(rs.getString("field11"), 10); //0
                            field12 = lPad(rs.getString("field12"), 10); //0
                            field13 = lPad(rs.getString("field13"), 10); //0
                            rowid_update =
                                    rowid_update + "rowid = '" + rs.getString("a_rowid") +
                                    "' OR ";
                            rowid_select =
                                    rowid_select + "'" + rs.getString("a_rowid") +
                                    "', ";
                        } catch (Exception exp) {
                            // ... exception handling code
                        }
                        linea =
                                field1 + field2 + field3 + field4 +
                                field5 + field6 +
                                field7 + field8 +
                                field9 + field10 + field11 + field12 + field13 +
                                "\r\n";
                        file.write(linea);
                    }
                } catch (Exception e) {
                    // ... exception handling code
                }
                file.flush();
                if (hayValores) {
                    String delete;
                    delete =
                            "delete from temp_table where a_rowid in (" +
                            rowid_select.substring(0, rowid_select.length() - 2) +
                            ")";
                    rowid_select = "";
                    try {
                        stmt.executeUpdate(delete);
                    } catch (Exception exp) {
                        // ... exception handling code
                    }
                }
            }
            file.close();
            try {
                rs.close();
            } catch (Exception e) {
                // ... exception handling code
            }
            try {
                prepStmt.close();
            } catch (Exception e3) {
                // ... exception handling code
            }
            // Should the already extracted rows be flagged as extracted?
            if (tipoExtraccion.toUpperCase().equals("AUTO") ||
                marca.toUpperCase().equals("S")) {
                if (!(rowid_update.equals(""))) {
                    rowid_update =
                            rowid_update.substring(0, rowid_update.length() - 3);
                    String update =
                        "UPDATE sys_conciliation.conciliation " +
                        "SET FECHA_INTEGRADO = SYSDATE " +
                        "WHERE " + rowid_update;
                    try {
                        stmt.executeUpdate(update);
                        bbdd.commit();
                    } catch (SQLException e) {
                        // ... exception handling code
                    }
                }
            }
            try {
                bbdd.close();
            } catch (Exception e) {
                // ... exception handling code
            }
            try {
                stmt.close();
            } catch (Exception e) {
                // ... exception handling code
            }
            out.println("OK");
            out.close();
            LogManager.shutdown();
        }

        public String rPad(String campo, int longitud) {
            if (campo == null) {
                campo = "";
            }
            int lcampo = campo.length();
            if (lcampo > longitud)
                return (campo.substring(0, longitud));
            if (lcampo == longitud)
                return (campo);
            String nuevoCampo = campo;
            for (int a = 0; a < longitud - lcampo; a++) {
                nuevoCampo = nuevoCampo + " ";
            }
            return (nuevoCampo);
        }

        public String lPad(String campo, int longitud) {
            if (campo == null) {
                campo = "";
            }
            int lcampo = campo.length();
            if (lcampo > longitud)
                return (campo.substring(0, longitud));
            if (lcampo == longitud)
                return (campo);
            String nuevoCampo = campo;
            for (int a = 0; a < longitud - lcampo; a++) {
                nuevoCampo = " " + nuevoCampo;
            }
            return (nuevoCampo);
        }
    }

    Message was edited by:
    CptnAgua

  • Disadvantages of letting FACTFINANCE table grow bigger day by day?

    Hi All,
    Please share your inputs on the disadvantages and advantages of letting the FACTFINANCE table grow bigger day by day.
    As far as I know, there's no theoretical limit to the number of records.
    Is there any solution or process for keeping only actively used records in FACTFINANCE, and a smart way of dealing with very rarely used (historical) data?
    Please suggest/comment; it will greatly help us!
    Regards,
    Rajesh Muppala.

    The main disadvantage of a huge FACTFINANCE table will initially be the processing time when changing a dimension hierarchy. I have heard of processing taking 8 hours when a large fact table is involved. I am not sure of a theoretical limit; I suspect the limit will be more a matter of operating system limits than SQL Server limits.
    Of course, performance will slow when the fact table gets large, too. So archiving off historic (and unused) data is a good idea.
    Creating a system where only actively used records are kept would require a custom procedure where you'd need to analyse the queries coming from the client side. It seems a very complex method that could end up with errors and data corruption.
    I personally would rather make a business decision about it and archive a few years off into a separate application set. In an ideal world, it would be on a separate server as well, so the production system is not affected in any way.
    Another option would be to make custom partitions in the finance cube. This would mean the same fact table is used but different partitions for each year (for example).
    Tim

  • How to Ftp a Blob attachment in a table using osb service

    I tried a select with the DBAdapter, but it does not work: for BLOB objects you can't use SELECT; it throws an error like "expected NUMBER, got BLOB".
    I can call a stored procedure to write the file to some directory, but that file will not be created with the same name as the file stored in the DB. I'd have to hard-code the filename in UTL_FILE, or, if we pass a variable to get the name of the attachment file, we have to use a select query in a cursor, which throws the same error as in step 2.
    Can somebody tell me how to get the BLOB attachment with the same name as it is stored in the DB table?
    Edited by: user13745573 on Jan 31, 2011 4:35 AM

    Hi,
    I want to send an attachment through email.
    But I want to pick the file from, say, a local drive; how can I specify the path of the file in the file name?
    Also, I don't want to append the content from the payload to the file; I want to send the file as is.
    <ema:attachment>
    <ema:name></ema:name>
    <ema:type>text/plain</ema:type>
    <ema:content/>
    </ema:attachment>
    Please help.

  • How to transfer a blob column in a table to another blob column in another

    Well, as the title suggests, I'm trying to take the BLOB field in, say, my table Photo and put it in another BLOB field in my table Members. How would I do that?
    I'm working in Oracle Forms 10g.
    I tried to use a cursor that gets the picture (BLOB) and tried to fetch the result into the other table, :Members.Photo, but it says my link is not good. I also tried using variables, but that didn't do the trick either.
    The thing is, we have a system where, once the picture is taken, it is automatically put in the Photo table (it contains only one picture at a time, only one row), and then in Forms we have a message asking to press OK when the photo has been taken, and then it should insert it into the Members table....
    if more information is required please feel free to ask!
    Thanks in advance.

    INSERT INTO CLIENTS_PHOTOS (
      CLPHOTO_ID,
      CL_ID,
      DTHRS_PHOTO,
      PHOTO,
      SIGNATURE,
      CREE_LE,
      CREE_PAR,
      MODIFIE_LE,
      MODIFIE_PAR
    )  SELECT :NOUVELLE_PHOTO.CLPHOTO_ID,
              CLIENT.CL_ID,
              SYSDATE,
              PS.PHOTO,
              PS.SIGNATURE,
              SYSDATE,
              :NOUVELLE_PHOTO.L_CD_USER,
              SYSDATE,
              :NOUVELLE_PHOTO.L_CD_USER
         FROM PHOTO_SIGNATURE PS;

    Looks quite OK so far. Did you try to issue that insert manually in SQL*Plus? Also, is it correct that there is no WHERE condition in your select?

  • MVC Problem with getter method of table attribute in model class

    Hi,
    I am on 620 SP34. I am writing a BSP application with MVC. One of the model classes has an attribute of type table. I use this attribute in an htmlb tableView with '//MODEL/ZMY_TAB' for data binding. If I try to activate a getter method for this attribute, the application dumps with the exception "BSP exception: Structure component with name "ZMY_TAB" does not exist". I found the SAP source that raises this exception (see below). The source code effectively says: "I don't support getter methods for tables in attribute paths"! The setter method works fine, so I am at a loss. Has anyone of you written a getter method for a table attribute in BSP MVC? Is there anything special I have to consider?
    Thanks,
    Carsten
    Main Program CL_BSP_MODEL==================CP
    Source code of CL_BSP_MODEL==================CM00Z
    METHOD IF_BSP_MODEL_BINDING~GET_ATTRIBUTE_DATA_REF
           * check if attribute exists for binding!                                   
             if exists_attribute( l_name ) is initial.                                
               return.                                                                
             endif.                                                                               
    * setter or getter defined? Not supported for DATA REF requests            
             if get_getter( attribute_name = l_name ) is not initial.                 
               raise exception type cx_bsp_inv_component                              
                 exporting name = l_name.                                             
             endif.                                                   

    You have two options:
    1. Make your attributes public. It should work fine.
    2. If you need to process the attribute values before it is used, you can make the attribute private but will need three methods
    GET_T_ZMY_TAB that returns the table
    SET_T_ZMY_TAB that sets the values
    GET_M_T_ZMY_TAB that returns DDIC information about the attribute. The same holds for structures (change to GET_S_ and GET_M_S_) and simple attributes (change to GET_ and GET_M_).
    The set and get methods are documented to some extent at http://help.sap.com/saphelp_nw04/helpdata/en/fb/fbb84c20df274aa52a0b0833769057/content.htm, but there is no mention of the GET_M_ methods. I could not find a single document on the Model part of MVC.
    Once I added the GET_M_XYZ methods to my attributes, my BSPs started to work fine.
    Cheers
    Sreekanth

  • Table grows to 6 GB with 6k records only after Delete ORA-01653:

    Hello,
    I have a Table that i delete data from using
    DELETE FROM DJ_20255_OUTPUT a where trunc(a.LOADED_DATE) <trunc(sysdate -7);
    COMMIT;
    The issue I have is that when I want to repopulate the table, I get the error ORA-01653: unable to extend table.
    The table grows to over 6 GB, but if I truncate the table instead, it only grows to 0.8 MB once repopulated; in both cases there is no data left in the table after either action.
    So with TRUNCATE the table size is 0.8 MB, and if I use DELETE FROM ... the table grows to 6 GB.
    The repopulation of the table uses multiple insert statements, committing after each one.
    Is this a bug, or is there an action I should perform once I have deleted the data?
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit

    Is this an Index Organized table? (select IOT_TYPE from user_tables where table_name = 'DJ_20255_OUTPUT';)
    Are you saying that you use this sequence :
    DELETE .... in one single call to delete all the rows
    INSERT ... in multiple calls with COMMIT after each row
    Hemant K Chitale
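    If it does turn out that the DELETE has simply left the high-water mark at 6 GB, one option often suggested on 10g (a sketch only, assuming an ASSM tablespace and that enabling row movement is acceptable for this table) is to shrink the segment after the big delete:

    ```sql
    -- Reclaim the space below the high-water mark left by the large DELETE
    ALTER TABLE dj_20255_output ENABLE ROW MOVEMENT;
    ALTER TABLE dj_20255_output SHRINK SPACE;  -- add CASCADE to shrink dependent indexes too
    ```

    Unlike TRUNCATE, SHRINK SPACE works on a table that still contains rows, so it can be scheduled after the weekly purge.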

  • Does iCal's speed slow as file grows?

    I'm concerned that iCal is going to slow down as the program gets loaded up with events and to do lists.
    Is anybody using iCal for a small business making 20 to 30 entries per day and seeing a slowdown in the program's speed?
    We sync iCal across three computers, and we need the program to remain fast to use and sync for our staff.
    Is this free iCal program adequate for a small business, or should I be looking at a program like Now Up-to-Date & Contact for my needs?
    I spoke to support at BusySync (busymac.com), as an alternative to .Mac sync for iCal, and they said they had no reports of iCal slowing down as the file gets larger.
    Any comments are appreciated.

    It's worse than that: iCal doesn't seem to slow as files grow, it seems to slow randomly, and no one on this forum has the faintest idea why. Search "slow" and look back. There are hundreds of entries from people who've experienced iCal slowing to a snail's pace, and no one has a solution. I made the mistake of moving my business (maybe 1 or 2 entries a day) to iCal, and now I'm stuck waiting 10 seconds every time I want to make the smallest change. AM to PM: 10 seconds. One calendar to another: 10 seconds. It takes even longer to duplicate or copy an entry. I tried to change fourteen entries to a different calendar today and had to sit through seven minutes of the spinning beachball waiting for that simple change. It doesn't sync reliably with the four other Macs I own, and I'm also convinced it's crashing iTunes on me every time I try to sync Contacts and Calendars. I've been a Mac devotee since 1987, and I swear this is the worst single program Apple has ever written. In System 9 I was able to write a better calendar program in HyperCard. It's not worth the time it takes to install. I'd think twice before using it for anything important!
    I wish someone would at least acknowledge the problem!!!!
    Peter May

  • Why is Oracle Response time getting slow with time.

    Hi,
         I have a DB which was initially very fast, with a response time for one of the queries of < 5 sec.
         I have been using the DB for the last 15 days. Now the same query is taking 10 minutes. In the DB, a lot of insert and delete operations are being done on the table the query runs against. The number of records in the table has stayed constant at around 3 million records since the first day.
         If I import the DB into a new setup, then the response time becomes very good again in the new setup.
         What could be the problem causing the DB to get slow over time?
    Thanks,
    Tuhin

    It all depends on several factors.
    Do your tables and indexes have up-to-date statistics?
    You say the DB was very fast initially, with a response time for one query of < 5 sec. Initially there might have been a small amount of data; later the data may have grown and you may not have proper indexes.
    It could also be that your indexes got fragmented due to heavy deletes; they might need a reorg.
    My suggestion would be to look at the execution plans of the queries and see where the sessions are waiting.
    As others suggested, use explain plan, event 10046, and tkprof.
    Jaffar
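    A sketch of those two suggestions (the schema and table names are placeholders):

    ```sql
    -- Re-gather statistics on the affected table and its indexes
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'APP_OWNER',
        tabname => 'MY_TABLE',
        cascade => TRUE);
    END;
    /

    -- Trace the session with waits and binds (event 10046 level 12),
    -- run the slow query, then format the trace file with tkprof
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
    ```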

  • First Query Runs Fast.  Subsequent Queries Get Slower

    I am using JDeveloper 11.1.1.6.
    I have a SelectOneChoice.
    I have 2 tables that get updated when the SelectOneChoice changes.
    There are only a few records displayed for each selection.
    When the table initially loads, it loads quickly.
    Each time I change the SelectOneChoice, the table load gets slower and slower.
    Could this be a memory issue?

    Frank,
    I couldn't find any tables in the HR schema alone that I could set up this way.
    I needed a table where each record had multiple records in 2 different tables.
    What I did was to use the Employees table in the HR schema and the Orders and Customers tables from the OE schema.
    My goal was to create a test where I would select an employee from a selectOneChoice and have the Orders and Customers table populate based on the Employee selection.
    I created 3 Entity objects (Employees, Customers, Orders).
    This automatically created the appropriate Associations and Links.
    I added an LOV for the EmployeeId field on the Employees table.
    I dragged the EmployeeId field from DataControls to my page as a SelectOneChoice.
    I dragged Orders and Customers from DataControls to my page as tables.
    I setup the properties for each control (AutoSubmit and PartialTriggers).
    I debugged my page.
    As soon as I attempted to change my Employee, I get an error "Too many objects match the primary key oracle.jbo.key[200]".
    Apparently, my goal was not achieved.
    Any thoughts?

  • Getting data from table BSEG taking too long ... any solutions.

    Hello, people. I am currently trying to get data from table BSEG for one particular G/L account number, with restrictions, using FOR ALL ENTRIES.
    The problem is that even with such tight restrictions, it's causing my report program to run way too slowly. I added an option where you don't have to access table BSEG, and with that it runs just fine (all of this is on the PRD server).
    My questions are:
    1.) How come BSEG seems to make the report slow, even though I put some tight restrictions on it? I'm using FOR ALL ENTRIES where zuonr EQ i_tab-zuonr (it seems to work fine in DEV) and hkont EQ '0020103101' (Customer Deposits).
    2.) Is there a way for me to do the same thing as in #1, but much faster?
    Thanks guys and take care

    Hi
    It's better not to read the BSEG table if you don't have the keys BUKRS and BELNR, because the read can take a long time if there are many hits.
    If you want to find the records for a G/L account, it's better to read the index tables BSIS (for open items) and BSAS (for cleared items); there, the field HKONT is a key (and ZUONR too). So you can improve the performance:
    DATA: T_ITEMS LIKE STANDARD TABLE OF BSIS.
    SELECT * FROM BSAS INTO TABLE T_ITEMS
      FOR ALL ENTRIES IN I_ITAB WHERE BUKRS = <BUKRS>
                                  AND HKONT = '0020103101'
                                  AND ZUONR = I_ITAB-ZUONR.
    SELECT * FROM BSIS APPENDING TABLE T_ITEMS
      FOR ALL ENTRIES IN I_ITAB WHERE BUKRS = <BUKRS>
                                  AND HKONT = '0020103101'
                                  AND ZUONR = I_ITAB-ZUONR.
    Remember, every kind of item has its own index tables:
    - BSIS/BSAS for G/L Accounts
    - BSIK/BSAK for Vendors
    - BSID/BSAD for Customers
    These tables have the same information you can find in BSEG and BKPF.
    Max

  • OAI_AGENT_ERROR table growing rapidly

    Hi
    I have a problem with my oai_agent_error table growing at a rate of 1,000 records every few minutes. My DB adapter reports the following error before dropping the messages:
    oracle.oai.agent.server.transform.database.DBTransformationException: PLSQLTransformation.transform: Transformation failed
    at oracle.oai.agent.server.transform.database.PLSQLTransformation.transform(PLSQLTransformation.java:359)
    at oracle.oai.agent.server.transform.BuiltInTransformation.transform(BuiltInTransformation.java:293)
    at oracle.oai.agent.server.transform.MessageTransformer.processStackFrame(MessageTransformer.java:849)
    at oracle.oai.agent.server.transform.MessageTransformer.processTransformMetadatalet(MessageTransformer.java:489)
    at oracle.oai.agent.server.transform.MessageTransformer.transformMessage(MessageTransformer.java:276)
    at oracle.oai.agent.server.InMessageTransformer.processObject(InMessageTransformer.java:87)
    at oracle.oai.agent.common.QueueAgentComponent.run(QueueAgentComponent.java:110)
    at java.lang.Thread.run(Thread.java:534)
    We have a custom PL/SQL package that does the AV to CV transforms for us and this is valid at the moment.
    I also see that it's not all messages that are getting dropped, as some messages have reached my spoke systems properly transformed.
    I would really like to know how to debug this further to stop these messages from being dropped.
    Any help is much appreciated!
    Thank you

    Hi,
    "oracle.oai.agent.server.transform.database.DBTransformationException: PLSQLTransformation.transform: Transformation failed" in the log file means this error occurred at the agent/application level. So, using the metadata, the interconnect is not able to convert the input data into the application view.
    The error is "transformation failed", so the reason can be:
    --> any unhandled exceptions
    --> a mismatch in the data types of the fields/parameters used
    --> a mismatch in the number of fields/elements in the input data
    "Transformation failed" indicates the interconnect is not able to transform the input data into the format specified in the application view or common view.
    As you mentioned, this is happening only with a few records.
    --> check whether the failed records are for testing negative cases
    --> check whether the failed records have all the values expected as mandatory for the transformation
    --> check whether the failed records have data with the expected datatypes (e.g. field a is defined as a number, but field a in the record has the value 'abc' instead of 123)
    Hope this information helps you.

  • Mail Getting Slower and Slower

    Hello all -
    I am running Mail 1.3.11 on my 733MHz G4 with OSX 10.3.9. My internet connection is Verizon's 768/128 DSL.
    Mail is getting slower and slower, especially sending. (I have timed some outgoing messages at 2 Mb/sec - nowhere near the theoretical 128 Mb.)
    I am not allowing my In Box to accumulate many emails. But my Sent Box is quite large and growing larger, since I've been reluctant to discard many of them.
    Could Mail's slowness be due to my growing Sent box? Is there anything else I should look at or do? (My internet browser, Firefox, does not appear to have the same problem.)
    Thanks in advance for any suggestions -
    - OF

    hello,
    With Mail closed, you could always test whether your sent mailbox is too large by copying it in Finder.
    ~/Library/Mail/YourEmailaccountName/Sent Messages.mbox
    and pasting it to say, the Documents folder, then in Mail you could delete all of your sent messages and see if that makes a difference.
    If you wanted the Sent mailbox back, you would import it from where you copied it to, via Mail > File > Import Mailboxes, and select it. I don't know if this is the problem, but you could at least exclude it as a possibility.

  • Report with a Download link for a Pdf stored in Blob in database FND tables

    We attach a PDF file to the Receivables invoices in Oracle EBS. We use APEX to report from EBS tables. We have a requirement for an APEX report that displays the PDF attached to the invoices, with the capability to download the PDF. These PDFs are stored in a BLOB column in the fnd_lobs table.
    Any pointers on how to approach is highly appreciated.
    Thanks
    Jo

    Check this: How to Upload and Download Files in an Application.
    Let me know if you have any doubts.
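    As a sketch of the usual approach (not the linked document's exact code; the fnd_lobs column names below are the standard EBS ones, but verify them in your instance), a download procedure can stream the BLOB under its stored filename:

    ```sql
    CREATE OR REPLACE PROCEDURE download_fnd_lob (p_file_id IN NUMBER) AS
      l_blob BLOB;
      l_name fnd_lobs.file_name%TYPE;
      l_mime fnd_lobs.file_content_type%TYPE;
    BEGIN
      SELECT file_data, file_name, file_content_type
        INTO l_blob, l_name, l_mime
        FROM fnd_lobs
       WHERE file_id = p_file_id;

      -- Send HTTP headers so the browser offers the original filename
      owa_util.mime_header(NVL(l_mime, 'application/octet-stream'), FALSE);
      htp.p('Content-Length: ' || dbms_lob.getlength(l_blob));
      htp.p('Content-Disposition: attachment; filename="' || l_name || '"');
      owa_util.http_header_close;

      -- Stream the BLOB content to the client
      wpg_docload.download_file(l_blob);
    END download_fnd_lob;
    /
    ```

    The APEX report column then links to this procedure, passing the invoice attachment's file_id.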
