Addressing database performance with large tables that store attachments (.doc, .jpg, .pdf, etc.)

Hello Folks,
I have a database with large tables that contain attachments of various kinds of files, such as *.doc, *.docx, *.xlsx, *.jpg, *.jpeg, *.pdf. Over time this database has grown quickly and become quite a handful to maintain.
I've been thinking about a proper approach to redesign the table structure, with a small change on the application side.
How would you implement a design in which the physical file attachment is not stored in these tables, but instead only a path to the file, which lives on a separate storage drive? The column's data type could then be just a string rather than an image.
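Here is a minimal sketch of that idea, assuming a Java application; the share path and the Attachment table with its FilePath column are hypothetical names used only for illustration. The file bytes go to a file share and only the path string is stored in the database:

// Hypothetical sketch: store the uploaded bytes on a file share and keep only
// the path in the database. Table/column names (Attachment, FilePath) and the
// UNC share are made up for illustration.
import java.nio.file.*;
import java.sql.*;
import java.util.UUID;

public class AttachmentStore {
    private static final Path SHARE = Paths.get("\\\\fileserver\\attachments"); // assumed UNC share

    public static String save(Connection con, long parentId, String originalName, byte[] data)
            throws Exception {
        // Write the physical file under a generated name to avoid collisions
        Path target = SHARE.resolve(UUID.randomUUID() + "_" + originalName);
        Files.write(target, data);

        // Store only the path (a plain string) instead of an image/varbinary column
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO Attachment (ParentId, FileName, FilePath) VALUES (?, ?, ?)")) {
            ps.setLong(1, parentId);
            ps.setString(2, originalName);
            ps.setString(3, target.toString());
            ps.executeUpdate();
        }
        return target.toString();
    }
}

The application then opens the file from the stored path whenever it needs the attachment, and the database only ever holds short strings.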
This topic first appeared in the Spiceworks Community

Hi,
I'm sorry you are having this problem. Here is another post about the same problem, where its cause is described:
https://support.mozilla.com/en-US/questions/894442
A bug has been filed to track resolution of the issue here, because a true fix isn't yet available:
https://bugzilla.mozilla.org/show_bug.cgi?id=703015
I apologize for the inconvenience.
Regards,
Michelle

Similar Messages

  • Can ui:table deal with large table?

    I have found that h:dataTable can do pagination because its data source is just a DataModel. But ui:table's data source is a data provider, which looks rather complex and confusing.
    I have a large table and I want to load the data on demand. So I tried to implement a provider, but I soon found that ui:table seems to always load all data from the provider.
    In TableRowGroup.java there is a lot of code such as:
    provider.getRowKeys(rowCount, null);
    Passing null makes the provider load all the data.
    So ui:table can NOT deal with a large table!?
    thx.
    fan

    But ui:table just uses the TableDataProvider interface. TableDataProvider is a wrapper for the CachedRowSet.
    There are two layers between the ui:table component and the database table: the RowSet layer and the Data Provider layer. The RowSet layer makes the connection to the database, executes the queries, and manages the result set. The Data Provider layer provides a common interface for accessing many types of data, from rowsets, to Array objects, to Enterprise JavaBeans objects.
    Typically, the only time that you work with the RowSet object is when you need to set query parameters. In most other cases, you should use the Data Provider to access and manipulate the data.
    What can a CachedRowSet (or CachedRowSet provider?) do?
    Check out the API that I pointed you to, to see what you can do with a CachedRowSet.
    Does the Table cache the data itself?
    Maybe this way is convenient for filtering and ordering?
    Thx.
    I do not know the answers to these questions.
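    Regarding the CachedRowSet mentioned above, here is a minimal paging sketch; the JDBC URL, credentials, table and query are placeholders, not part of the original thread. It pulls rows a page at a time instead of everything at once:

    // Minimal sketch of page-at-a-time loading with a CachedRowSet, the class
    // that the table's data provider wraps. Connection details and the query
    // are placeholders.
    import javax.sql.rowset.CachedRowSet;
    import javax.sql.rowset.RowSetProvider;

    public class PagedRowSetDemo {
        public static void main(String[] args) throws Exception {
            CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
            crs.setUrl("jdbc:mydriver:mydb");      // placeholder JDBC URL
            crs.setUsername("user");
            crs.setPassword("pass");
            crs.setCommand("SELECT id, name FROM big_table");
            crs.setPageSize(100);                  // fetch 100 rows per page instead of everything
            crs.execute();

            do {
                while (crs.next()) {
                    System.out.println(crs.getInt("id") + " " + crs.getString("name"));
                }
            } while (crs.nextPage());              // pull the next page on demand
            crs.close();
        }
    }

    Whether ui:table takes advantage of this paging is exactly the open question above; the sketch only shows what the rowset layer itself offers.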

  • Dramatic slowdown with large tables--common problem?

    I opened an excel .xls file in Numbers. The file has one sheet and on that sheet is a table with 45,000 rows and 7 columns, including a date column. I was surprised to experience response times between 30 - 60 seconds for doing tasks like adjusting column widths and formatting the date column. Sorting and filtering also took > 30 seconds, each time I changed something (sort order, filter criteria). There are no formulas in the table at this point--just data.
    I hacked off 3/4 of the rows to get the table below 10,000 rows and did the same tasks. The times were roughly proportionately shorter, but still pretty long compared to Excel on Windows.
    I'm running a Mac Pro with 2 quad-core processors and six GB of memory, so I don't think the problem is hardware. (When I run the same tasks in Windows on the same machine the response time is instantaneous for all tasks).
    Is this a problem with Numbers that anyone with large tables will experience, or is it likely to be something wrong with my particular installation of Numbers (I'm on the most recent version of Numbers)?

    Hello
    The Terms of Use, which you MUST have read, claim:
    to help you resolve issues, ask questions, get tips and advice, and more.
    If you have a technical question about an Apple product, be sure to check out Apple's support resources first by consulting the application Help menu on your computer and visiting our Support site to view articles and more on our product support pages.
    How do I post a question?
    If you searched the forums and didn't find an answer to your question or issue, click the Post New Topic link at the top of a relevant forum page to post your own question.
    A quick search with the keyword "speed" would already have told you that: yes, Numbers is slow!
    Yvan KOENIG (from FRANCE lundi 25 février 2008 20:46:52)

  • With the automatic addition of an update on both home & work computers, attachments went from being identified & opened as appropriate files, even when identified as .doc or .pdf, etc., all become .ashx

    With the automatic addition of an update on both home & work computers, attachments went from being identified & opened as appropriate files, even when identified as .doc or .pdf, etc., all become .ashx (see details below)

  • Error in sync group with large tables

    For a few days now, the automated sync process configured in the Windows Azure portal has been failing. The following message appears:
    SqlException Error Code: -2146232060 - SqlError Number:40550, 
    Message: The session has been terminated because it has acquired too many locks. 
    Searching for the error on the internet, I found the following post:
    http://blogs.msdn.com/b/sync/archive/2010/09/24/how-to-sync-large-sql-server-databases-to-sql-azure.aspx
    Basically it says that in order to increase the application transaction size it's necessary to include some parameters in the remote and local provider. There's an example script for that.
    But how can I apply this change if my data sync process was created through the Azure web portal? Is there a way to access the sync scripts? How can I increase the transaction size from the Azure portal?
    Please, any help is welcome
    Alvaro

    Hi Alvaro,
    I'm afraid there is no way to access the sync scripts or increase the transaction size from the Azure portal when using a SQL Azure data sync group.
    The error 40550 occurs when a session acquires more than one million locks. You can use the following DMVs to monitor your transactions in SQL Azure. Usually, the solution to this error is to read or modify fewer rows in a single transaction.
    sys.dm_tran_active_transactions
    sys.dm_tran_database_transactions
    sys.dm_tran_locks
    sys.dm_tran_session_transactions
    In your scenario, to overcome error 40550, I recommend using the bcp utility or SQL Server Integration Services (SSIS) to move data from the large table to SQL Azure.
    With the bcp utility, you can divide your data into multiple sections and upload each section by executing multiple bcp commands simultaneously. With SSIS, you can divide your data into multiple files on the file system and upload each file by executing multiple streams simultaneously.
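    As a rough illustration of the "fewer rows per transaction" advice (not an official script; the table, columns, and batch size are assumptions), a plain JDBC copy loop that commits in small chunks would look something like this:

    // Rough sketch of moving rows in small transactions so a single session never
    // accumulates too many locks. Table name, columns and batch size are illustrative.
    import java.sql.*;

    public class ChunkedCopy {
        public static void copy(Connection source, Connection azure) throws SQLException {
            azure.setAutoCommit(false);
            final int BATCH = 5_000;
            int inBatch = 0;

            try (Statement read = source.createStatement();
                 ResultSet rs = read.executeQuery("SELECT id, payload FROM big_table");
                 PreparedStatement write = azure.prepareStatement(
                         "INSERT INTO big_table (id, payload) VALUES (?, ?)")) {

                while (rs.next()) {
                    write.setLong(1, rs.getLong("id"));
                    write.setString(2, rs.getString("payload"));
                    write.addBatch();
                    if (++inBatch == BATCH) {
                        write.executeBatch();
                        azure.commit();      // commit each chunk, releasing its locks
                        inBatch = 0;
                    }
                }
                if (inBatch > 0) {
                    write.executeBatch();
                    azure.commit();
                }
            }
        }
    }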
    Reference:
    Optimizing Data Access and Messaging - SQL Azure Connection Management
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • Oracle database integration with SAP PI for high volume & Complex Structure

    Hi
    We have a requirement to integrate an Oracle database with SAP PI 7.0 for sending data which is eventually transferred to multiple receivers. The data structure involved is hugely complex (around 18 child tables), with a high-volume processing requirement (100K+ objects need to be processed in 6-7 hours). We need to implement logic for prioritizing the objects, i.e. high-priority objects must be processed first, then objects with normal priority.
    We could think of implementing this kind of logic in database procedures (at least that provides flexibility for implementing the data selection logic, and the processed data can be marked as successful in the same SP), but since the PI sender adapter doesn't currently support calling Oracle stored procedures, this option is ruled out. We could try implementing the complex data selection using an Oracle table function, but a table function doesn't allow any SQL that changes data (UPDATE, INSERT, DELETE, etc.), so it is impossible to mark the selected objects from the table function via the PI communication channel's "Update Query" option.
    Also, we need to make sure that we are not processing all the objects at once, as the message size for 20 objects can vary from 100 KB to 15 MB, which could lead to serious performance issues for the bigger messages.
    Please share any implementation experience for handling issues:
    1 - Database Integration involving Oracle at sender side
    2 - Complex Data structures
    3 - High Volume Processing
    4 - Controlled data selection from the database to control the message size in PI
    Thanks,
    Panchdev

    Hi,
    We can call the stored procedure using a receiver adapter together with ccBPM; there are different approaches for reading the data in this case.
    a) In this approach, a ccBPM instance is triggered by a dummy message; after receiving it, the ccBPM can make a sync call to the Oracle stored procedure (this can be done using the specific receiver data type structure), and on getting the response message the ccBPM can proceed with the further steps. The stored procedure needs to be optimized for performance, as the mapping complexity is largely affected by the structure in which the stored procedure returns the message. Prioritization of the objects can be handled in the stored procedure.
    b) In this approach, a ccBPM instance can first read data from the header-level table, then make subsequent sync calls to the Oracle child tables. This approach is less suitable for this interface, as the number of child tables is large.
    Pravesh.

  • Using workspaces with large tables

    Hello
    I've got a few large tables (6-10GB+) that will have around 500k new rows added on a daily basis as part of an overnight batch job. No rows are ever updated, only inserted or deleted and then re-inserted. I want to stop the process that adds the new rows from being an overnight batch to being a near real time process i.e. a queue will be populated with requests to rebuild the content of these tables for specific parent ids, and a process will consume those requests throughout the day rather than going through the whole list in one go.
    I need to provide views of the data as of a point in time, i.e. what was the content of the tables at close of business yesterday, and for this I am considering using workspaces.
    I need to keep at least 10 days' worth of data, and I was planning to partition the table and drop one partition every day. If I use workspaces, I can see that Oracle creates a view in place of the original table and creates a versioned table with the _LT suffix - this is the table name returned by DBMS_WM.GetPhysicalTableName. Would it be considered bad practice to drop partitions from this physical table as I would with a non-version-enabled table? If so, what would be the best method for dropping off old data?
    Thanks in advance
    David

    I've just spotted the workspace manager forum, I'll post there. :-)

  • XSU: Dealing with large tables / large XML files

    Hi,
    I'm trying to generate an XML file from a "large" table (about 7 million rows, 512 MB of storage) by means of XSU. I run into "java.lang.OutOfMemoryError" even after raising the heap size to 1 GB (option -Xmx1024m on the java command line).
    For the moment, I'm involved in an evaluation process. But in the near future, our applications are likely to deal with large amounts of XML data (typically hundreds of MB of storage, which means possibly GB of XML code), both in updating/inserting data and in producing XML streams from existing data in a relational DB.
    Any ideas about memory issues regarding XSU? Should we consider using XMLType instead of "classical" relational tables loaded/unloaded by means of XSU?
    Any hint appreciated.
    Regards,
    /Hervi QUENIVET
    P.S. our environment is Linux red hat 7.3 and Oracle 9.2.0.1 server

    Hi,
    Try to split the XML before you process it. You can take a look at the XMLDocumentSplitter explained in the book Building Oracle XML Applications by Steve Muench.
    The other alternative is to write your own SAX handler and send the chunks of XML for insert.
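    A minimal sketch of that hand-rolled SAX approach (the element name "ROW", the chunk size, and the file name are assumptions, not from the original thread): stream the document and hand off rows in fixed-size chunks instead of materializing everything in memory.

    // Hand-rolled SAX handler sketch: stream the large XML file and hand off rows
    // in fixed-size chunks instead of building the whole document in memory.
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;
    import javax.xml.parsers.SAXParserFactory;
    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    public class ChunkingHandler extends DefaultHandler {
        private static final int CHUNK = 1_000;
        private final List<String> buffer = new ArrayList<>();
        private StringBuilder current;

        @Override
        public void startElement(String uri, String local, String qName, Attributes atts) {
            if ("ROW".equals(qName)) current = new StringBuilder();
        }

        @Override
        public void characters(char[] ch, int start, int length) {
            if (current != null) current.append(ch, start, length);
        }

        @Override
        public void endElement(String uri, String local, String qName) {
            if ("ROW".equals(qName)) {
                buffer.add(current.toString());
                current = null;
                if (buffer.size() >= CHUNK) flush();
            }
        }

        @Override
        public void endDocument() { flush(); }

        private void flush() {
            // Insert or otherwise process the buffered rows here, then discard them
            System.out.println("processing " + buffer.size() + " rows");
            buffer.clear();
        }

        public static void main(String[] args) throws Exception {
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new File("big_export.xml"), new ChunkingHandler());
        }
    }

    Each flush() call would do the actual insert work; because only the current chunk is retained, the heap stays small regardless of the file size.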

  • I want to restore the look and feel of Firefox 3.6 etc with large icons for last page next page reread and home page located off the toolbar in the upper left like 3.6

    I want 4.0 to look like 3.6, with large separate icons for last page, next page, reread current page, and go to home page, in lieu of the little icons to the right of the default navigation toolbar. I tried setting up a separate toolbar but it never reappeared. There should be a way to just restore the 3.6 look and feel. Otherwise I will go back to 3.6.

    You can make Firefox 4 look and behave more like Firefox 3.6, for details see http://www.computertechtips.net/64/make-firefox-4-look-like-ff-3-6

  • Database design with some tables with EAV format

    We are designing a database in which most tables follow a straight relational design.  Some tables, however, would benefit from an entity-attribute-value structure.
    Specifically, we want to be able to add new value categories simply by adding a row in a data dictionary table.  At a later point, we want to create cursors where those categories show up as columns.
    Can someone point to tools or generic code that does that job?
    Thank you very much,
    Alex

    You don't need cursors to show the entities as columns. It can be done using set-based code. The concept is called cross-tabbing or pivoting.
    See these examples:
    www.mssqltips.com/sqlservertip/1019/crosstab-queries-using-pivot-in-sql-server/
    https://www.simple-talk.com/sql/t-sql-programming/creating-cross-tab-queries-and-pivot-tables-in-sql/
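    For illustration, a set-based cross-tab can also be generated from application code. This sketch (the table and column names eav_data, entity_id, attr_name, attr_value are assumptions) reads the attribute names from the EAV table and builds one MAX(CASE ...) column per attribute, with no cursors involved:

    // Illustrative sketch of building a cross-tab query from the attribute names in
    // an EAV layout, so the attributes come back as columns without any cursors.
    // Table and column names (eav_data, entity_id, attr_name, attr_value) are assumptions.
    import java.sql.*;
    import java.util.ArrayList;
    import java.util.List;

    public class EavCrossTab {
        public static ResultSet pivot(Connection con) throws SQLException {
            // 1. Read the attribute names that should become columns
            List<String> attrs = new ArrayList<>();
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT DISTINCT attr_name FROM eav_data")) {
                while (rs.next()) attrs.add(rs.getString(1));
            }

            // 2. Build one MAX(CASE ...) column per attribute (set-based, no cursor)
            StringBuilder sql = new StringBuilder("SELECT entity_id");
            for (String a : attrs) {
                sql.append(", MAX(CASE WHEN attr_name = '").append(a)
                   .append("' THEN attr_value END) AS [").append(a).append("]");
            }
            sql.append(" FROM eav_data GROUP BY entity_id");

            // 3. Run the generated cross-tab query
            return con.createStatement().executeQuery(sql.toString());
        }
    }

    The generated SQL should be treated carefully (the attribute names are interpolated directly), but it shows the pivoting idea the links above describe.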
    Please Mark This As Answer if it solved your issue
    Please Vote This As Helpful if it helps to solve your issue
    Visakh

  • Tables for Documents&doc. type assigned to Equipment master

    Hi
    I need a table for documents & document types assigned to the equipment master. I have tried ITOB, EQUI, and DRAW; I could not find it.
    Thanks
    Seenu

    hi
    Yes, you can use the object key; it is the one which relates to the equipment (object key EQUI).
    regards
    thyagarajan

  • Stored procedure with database link with "from table(...)"

    Hi guys,
    I've been told I can't create views on a database by the design team and so have to use this stored procedure to obtain the values.
    select HAN_ID, HAN_DS, GLOBAL_IN, LOCAL_IN
    from table(cast(ODADMIN.ODP00002_QUERY.Execute001@DBLINK(11312,'EN') as
    ODADMIN.ODP00002_001_Array@DBLINK)) WHERE LOCAL_IN = 'Y';
    I've been told that it works when you remove the database links (so on the actual database) and when you remove the cast part. I've tried it with my link and with/without the cast part, but it doesn't work. With the example above I get the error: ORA-00907: missing right parenthesis.
    When I remove the CAST-AS and the additional parenthesis it brings, I get the error: ORA-00904: "ODADMIN"."ODP00002_QUERY"."EXECUTE001": invalid identifier
    When I do table("ODADMIN.ODP00002_QUERY.Execute001@MWW_DEV(11312,'EN')") -- wrapping the call in quotes -- I get: ORA-00972: identifier is too long
    Anyone see what's wrong? Thanks for any help.
    Mike

    Hi Ben,
    Asking now. By a view I mean one local to the database; I could create one on APEX, but then I would use the database link twice instead of just once.
    His reasoning Ben:
    Firstly, maintenance. We will have to maintain additional views (and additional code). Secondly, if the view has a JOIN, then you can't update through it (without complexities). If we have to get the view to pass the data to a procedure, that's a pain. Also, standards:
    all updates occur via either a procedure or a base view, across over 1000 tables.
    That's the standard, and doing things differently is costly long term.
    People will not know how it works; it will have to be explained, maintained, etc.
    If the application has the business rules, then updating via base views is the more standard way of developing. Also, if you update via this view, you'll update multiple rows in one call, which is inefficient if only ONE row needs to change. Therefore, single-row updates from the application are more efficient.
    The procedure is as follows:
    --SET SERVEROUTPUT ON
    DECLARE
    nPBusLoc       NUMBER(5):=11312;
    sPHanId        VARCHAR2(3):='SB1';
    sPLngId        VARCHAR2(2):='EN';
    sPDesc         VARCHAR2(30);
    sPAllowAlloc   VARCHAR2(1);
    sPShowEnq      VARCHAR2(1);
    sPAllowDel     VARCHAR2(1);
    sPShowScan     VARCHAR2(1);
    sPGlobalLocal  VARCHAR2(1);
    sPReturnCd     VARCHAR2(2);
    sPReturnTx     VARCHAR2(100);                  
    BEGIN
    ODADMIN.ODP00001.getHandlingCodes
                           (nPBusLoc    --  IN   NUMBER
                          ,sPHanId      -- IN   VARCHAR2
                          ,sPLngId      -- IN   VARCHAR2
                          ,sPDesc       -- OUT  VARCHAR2
                          ,sPAllowAlloc -- OUT  VARCHAR2
                          ,sPShowEnq    -- OUT  VARCHAR2
                          ,sPAllowDel   -- OUT  VARCHAR2
                          ,sPShowScan   -- OUT  VARCHAR2
                          ,sPGlobalLocal-- OUT  VARCHAR2
                          ,sPReturnCd   -- OUT  VARCHAR2
                          ,sPReturnTx   -- OUT  VARCHAR2
                          );
    DBMS_OUTPUT.PUT_LINE('nPBusLoc                 = '||nPBusLoc              );
    DBMS_OUTPUT.PUT_LINE('sPHanId                  = '||sPHanId               );
    DBMS_OUTPUT.PUT_LINE('sPLngId                  = '||sPLngId               );
    DBMS_OUTPUT.PUT_LINE('sPDesc                   = '||sPDesc                );
    DBMS_OUTPUT.PUT_LINE('sPAllowAlloc             = '||sPAllowAlloc          );
    DBMS_OUTPUT.PUT_LINE('sPShowEnq                = '||sPShowEnq             );
    DBMS_OUTPUT.PUT_LINE('sPAllowDel               = '||sPAllowDel            );
    DBMS_OUTPUT.PUT_LINE('sPShowScan               = '||sPShowScan            );
    DBMS_OUTPUT.PUT_LINE('sPGlobalLocal            = '||sPGlobalLocal         );
    DBMS_OUTPUT.PUT_LINE('sPReturnCd               = '||sPReturnCd            );
    DBMS_OUTPUT.PUT_LINE('sPReturnTx               = '||sPReturnTx            );
    END;
    /
    Mike

  • Large table for IDoc

    Hello Guru:
    Now the DB table EDID4 is very large (more than 45 million records) in our BW system. Is there any way to delete the contents of the table without harming our BW logic?
    The system reads some data from this table when we open the monitor.
    Thank you.
    Best regards,
    Eric

    Hi,
    You can use SAP's archiving (transaction SARA) to archive old IDocs. These can always be restored back into the system if required.
    You can select the IDOCs for archiving based on dates to delete IDOCs which are quite old and are not likely to be used.
    You can use an archiving job to automatically delete the idocs periodically (say idocs older than six months, the job running every week).
    cheers,
    Ajay

  • Usage of formula variable with custom table for values

    Hi,
    I have the following scenario:
    a customer wants a report regarding payments of invoices, with a formula in it to calculate interest. In order to do so, they want to be able to define an interest rate which can be used to calculate the result by multiplying the rate by time and amount (the same rate should be used for every line in the query).
    As a solution, I thought it would be easiest to give the key user access to a custom table via a custom transaction, and then use the entered rate in the query by selecting it from this table with a formula variable (using an exit).
    I was wondering if anyone has ever used such a solution, or whether you think this is possible.

    Hi Brock,
    It's possible, but I haven't tried it, as my requirements didn't call for it.
    A simpler idea: why don't you use a flat-file load through the IP into the InfoProviders?
    Or, an even better option: why don't you use virtual InfoProviders?
    As for your suggested custom table maintained via a custom transaction, we can surely build it. It's possible in SE93. Try that transaction; it should definitely help.
    Thanks,
    Arun Bala G

  • Working with large tables - thumbnail size

    Hi,
    I'm working with some oversized tables in IBA. What I usually do is make the table as needed, then use the "Uses thumbnail on page" option in the Layout section of the Inspector and adjust the thumbnail size to fit the page as needed. What happened this time is that after the document was closed and reopened, some tables reset the thumbnail size back to the default, which is small. I can't seem to find what's causing this; one table is not behaving like that, although it was made using the same method. Has anyone else run into the same thing? Any suggestions?
    Thanks in advance.

    Why don't you use a stored proc?
    Why are you ordering it?
    Should I take partial entries in a loop? Yep. Because software isn't perfect. No point in attempting to process the universe when you know it will fail sometime and it is easier to handle smaller failures than large ones (and you won't have to redo everything.)

Maybe you are looking for

  • How to Change the XML data that got stuck up in XI

    Hi, I am executing a scenario which sends the data from HTTP client>Xi->SAP(R/3 4.6).We are queuing the messages in XI, through QoS EOIO.Sender side we have configured the HTTP adapter and on receiver side we have configured RFC adapter.RFC at receiv

  • TS2634 all im getting on my screen is an itunes icon and plug in device cord

    all im getting on my screen is an itunes icon and plug in device cord

  • Steps for completing Asset under Construction

    hi, i have created the AUC  asset class  with line settlement checked. will somebody be kind enough to show me the futher steps involved in customization along with postings and settlement procedure.... regards sayeed

  • IDES ECC 5.0, BW, FI, HR

    Hi, I am new to SAP world and looking for IDES ECC 5 so I can learn SAP better. Please share your IDES ECC 5 with installation documentation. I am willing to pay a price via paypal. Please email me at [email protected] Please help. Thank you so much.

  • Want to make a link in Customer order sample application!

    Want to make a link in Customer order sample application! ( In Reply To : Re: How to send E-mail from customer order sample application ? ) Mar 22, 2004 6:24 AM Reply Is it possible to make a link for upload files from Excel? ( In Reply To : Re: Erro