What is the best approach to return large data from a stored procedure?

No answers to my original post, so maybe better luck this time. Thanks!
We have a stored procedure (Oracle 8i) that:
1) receives some parameters,
2) performs computations which create a large block of data, and
3) returns this data to the caller.
It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
I have tried to open the cursor as a SELECT of the working buffer (from dual), but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
I suspect this is taking too much time; any tips about the best approach here? Is there a resource with REAL examples on returning large data?
If I switch to a CLOB, will it speed up the process, be compatible with callers, etc.? All the references to CLOB I have seen use trivial examples.
Thanks for any help,
Yoram Ayalon
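
For comparison, here is a minimal, hedged sketch of the CLOB alternative being asked about: build the result directly in a temporary CLOB OUT parameter instead of staging a LONG buffer in a temp table. The procedure and parameter names are illustrative, and whether MSDAORA and the Oracle ODBC driver handle CLOB OUT parameters efficiently would still need to be tested against both clients.
CREATE OR REPLACE PROCEDURE get_large_data (
    p_param1  IN  VARCHAR2,
    p_result  OUT CLOB
) AS
BEGIN
    -- Build the result in a session-temporary CLOB instead of a LONG buffer.
    DBMS_LOB.CREATETEMPORARY(p_result, TRUE);
    -- ... computations that append chunks to the result ...
    DBMS_LOB.WRITEAPPEND(p_result, LENGTH('example chunk'), 'example chunk');
END;
/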

Similar Messages

  • What is the best way to transfer my data from my old iMac to my new iMac?

    What is the best way to transfer my data from my old iMac to my new iMac? I don't have a Time Capsule, so I cannot use Time Machine.
    thanks

    Your best bet by far is to use Setup Assistant when the new Mac first starts up to transfer directly from the old one.
    It's similar to Migration Assistant, but doing it that way will avoid having an extra user account.
    Either way, you can connect them via FireWire, Thunderbolt, or your network, but avoid wireless if at all possible. You can connect both to your network via Ethernet, or put a single cable between the two.
    See Using Setup Assistant on Mountain Lion or Lion for details.

  • Purchasing a new iMac (27" i5), what is the best method to transfer programs/files/documents from a 24" iMac? What is the best method to remove personal data from the OS?

    Purchasing a new iMac (27" i5), what is the best method to transfer programs/files/documents from a 24" iMac?
    What is the best method to remove personal data from the 24" iMac?

    Use Setup Assistant, which is offered when you set up your new Mac. It will transfer information from a Time Machine backup, a clone, or another Mac.
    It's best to do this during setup to avoid issues with duplicate IDs.
    Regards

  • What is the best approach to generate control numbers from BPEL?

    1. If we want to control ISA/GS/ST control numbers from BPEL, what is the best approach to do that?
    2. How do we generate these control numbers, and where do we store them to get a sequence out of them?
    Thanks,
    Kathar

    Internally, Oracle B2B uses a DB sequence for generating the control numbers. It is the best approach, but at the same time it is not very straightforward, especially in the case of a clustered database. So you may carefully implement the same with BPEL (a small sketch of the sequence approach follows this thread).
    Hi Anuj,
    If we let B2B generate control numbers in a clustered environment, are there any settings we have to change?
    So you may carefully implement the same with BPEL. BTW, what is the use case behind this?
    We were thinking about using this to send out duplicate messages to two TPs, but we decided to go with a Java callout as you suggested in another thread.
    Thanks!
    Kathar
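    A minimal sketch of the sequence-based approach mentioned above; the sequence name and options are illustrative, not Oracle B2B's internal objects:
    CREATE SEQUENCE control_number_seq START WITH 1 INCREMENT BY 1 CACHE 20;
    -- Each outbound message takes the next value; a sequence stays unique and
    -- atomic even against a clustered (RAC) database.
    SELECT control_number_seq.NEXTVAL FROM dual;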

  • Best approach to return large data (> 4K) from stored proc

    We have a stored procedure (Oracle 8i) that:
    1) receives some parameters,
    2) performs computations which create a large block of data, and
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual), but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
    I suspect this is taking too much time; any tips about the best approach here? Is there a resource with REAL examples on returning large data?
    If I switch to a CLOB, will it speed up the process, be compatible with callers, etc.? All the references to CLOB I have seen use trivial examples.
    Thanks for any help,
    Yoram Ayalon

  • URGENT : Return Bulk data from Stored Procedure

    Hi,
    Tell me, how do I return a bulk of data from the stored procedure to a C++ program, when that data does not exist in the database but is derived while the stored procedure executes?
    For example:
    Table ABC
    Field1  Field2  Field3
    A       1       3
    B       1       5
    C       2       10
    Table DEF
    Field1  Field2  Field3
    D       10      24
    E       3       16
    F       8       19
    SP_TESTING
    Depending on the values in both tables, for some range of conditions, a conclusion X is derived for each value in the condition range.
    Now I need to return this bulk of data X, together with the condition each value belongs to, back to the calling C++ code....
    NOTE: A stored procedure is required, as there is a lot of processing needed before we conclude the result X for each value in the condition range.
    If I execute this code from C++ instead of a stored procedure it is very slow, and speed is a prime requirement of my system.
    Also, I'm not using any MFC class to access the database. I'm using ConnectionPtr, RecordsetPtr and _CommandPtr from msado15.dll for database access...
    One solution could be the use of temp tables. But as this process is used by a lot of different stored procedures, a temp table common to all of them would need something like 50 NUMERIC fields, 50 VARCHAR fields and so on, which doesn't seem like a very good solution to this problem.
    Sounds like something I would have done while in school - implement a dumb solution.
    So, please suggest a solution for how to return bulk data in the form of recordsets from a stored procedure.
    Regards
    Shruti

    Use the OUT parameter mode:
    SQL> CREATE OR REPLACE PROCEDURE a1 (x OUT NUMBER, y OUT NUMBER) AS
         BEGIN
               x := 1;
               y := 2;
         END;
         /
    Procedure created.
    SQL> SET SERVEROUTPUT ON
    SQL> DECLARE
          a NUMBER := 3;
          b NUMBER := 4;
         BEGIN
          a1 (a, b);
          DBMS_OUTPUT.PUT_LINE('a = ' || a);
          DBMS_OUTPUT.PUT_LINE('b = ' || b);
         END;
         /
    a = 1
    b = 2
    PL/SQL procedure successfully completed.
    By default, OUT parameters are copied back to the caller when the procedure ends; PL/SQL does not send a pointer to the calling program unit unless you use the NOCOPY hint, which passes a reference instead of a copy.
    Khurram
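    The example above returns scalar values; to return bulk data as a recordset, a common pattern is a REF CURSOR OUT parameter. Below is a minimal sketch with illustrative names, assuming Oracle 9i or later for SYS_REFCURSOR (older versions need a package-defined REF CURSOR type); ADO clients can typically consume such a cursor as a recordset through Oracle's OLE DB/ODBC providers.
    CREATE OR REPLACE PROCEDURE sp_testing (p_results OUT SYS_REFCURSOR) AS
    BEGIN
        -- Any amount of processing can happen here (deriving the conclusion X
        -- per condition, staging rows in a temp table, etc.) before the cursor opens.
        OPEN p_results FOR
            SELECT field1, field2, field3
            FROM   abc;  -- illustrative; the real query would combine ABC and DEF
    END;
    /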

  • What's the best approach to resetting Calendar data on Server?

    I have a database format error in a calendar that I only noticed after the migration to Server on Yosemite.  I'll paste a snippet from the Error Log at the bottom that shows the error - I've highlighted the description of the problem in red.
    I found a pretty cool writeup from Linc in a different thread, but it's aimed at fixing a similar problem for a local user on their own machine rather than an iCal server like the one we're running.  Here's the link to that thread: Re: Calendar crashes on open  For example, does something like Calendar Cleaner work on our server database as well?
    In my case I think I'd basically like to gracefully remove all the Calendar databases from Server and start fresh (all the users' calendars are backed up on their local machines, so they can just import them into fresh/empty calendars once I've cleaned out the old stuff).  Any thoughts on "best approach" would be much appreciated.
    Here's the error log...
    File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twi sted/internet/defer.py", line 1099, in _inlineCallbacks
    2015-01-31 07:14:41-0600 [-] [caldav-0]         result = g.send(result)
    2015-01-31 07:14:41-0600 [-] [caldav-0]       File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python 2.7/site-packages/txdav/caldav/datastore/sql.py", line 3635, in component
    2015-01-31 07:14:41-0600 [-] [caldav-0]         e, self._resourceID
    2015-01-31 07:14:41-0600 [-] [caldav-0]     txdav.common.icommondatastore.InternalDataStoreError: Data corruption detected (Invalid property: GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     VERSION:2.0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CALSCALE:GREGORIAN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     PRODID:-//Apple Inc.//Mac OS X 10.8.2//EN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTART:20121114T215900Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTEND:20121114T232700Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CLASS:PUBLIC
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CREATED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DESCRIPTION:Flight leg 2 of 2 for trip from MSP to LAX\\nhttp://www.google.
    2015-01-31 07:14:41-0600 [-] [caldav-0]      com/search?q=US+29+flight+status\\nBooked on November 8\\, 2012\\n
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTAMP:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LAST-MODIFIED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LOCATION:Sky Harbor International Airport\\, Phoenix\\, AZ
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SEQUENCE:0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     STATUS:CONFIRMED
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SUMMARY:US 29 from PHX to LAX
    2015-01-31 07:14:41-0600 [-] [caldav-0]     URL:http://www.hipmunk.com/flights/MSP-to-LAX#!dates=Nov14,Nov17&group=1&s
    2015-01-31 07:14:41-0600 [-] [caldav-0]      elected_flights=96f6fbfd91,be8b5c748d;kind=flight&locations=MSP,LAX&dates=
    2015-01-31 07:14:41-0600 [-] [caldav-0]      Nov14,Nov16&group=1&selected_flights=96f6fbfd91,
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-CALENDARSERVER-PERUSER-UID:D0737009-CBEE-4251-A288-E6FCE5E00752
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRANSP:OPAQUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACKNOWLEDGED:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACTION:AUDIO
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ATTACH:Basso
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRIGGER:-PT2H
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-APPLE-DEFAULT-ALARM:TRUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-WR-ALARMUID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ) in id: 3405
    2015-01-31 07:14:41-0600 [-] [caldav-0]    
    2015-01-31 07:16:39-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/ python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None
    2015-01-31 08:08:40-0600 [-] [caldav-1]  [AMP,client] [calendarserver.tools.purge#warn] Cleaning up future events for principal A95C9DB2-9757-46B2-ADF6-4DECE2728820 since they are no longer in directory
    2015-01-31 08:09:10-0600 [-] [caldav-1]  [-] [twext.enterprise.jobqueue#error] JobItem: 39, WorkItem: 762001 failed: ERROR:  canceling statement due to statement timeout
    2015-01-31 08:09:10-0600 [-] [caldav-1]    
    2015-01-31 08:13:40-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/ python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None

    <facepalm>  Well, there you go.  It turns out I was over-thinking this.  The Calendar app on a Mac can manage this database just fine.  Sorry about that.  There may be an easier way to do this, but here's how I did it.
    Use the Calendar.app on a local computer to:
    - Export the corrupted calendar to an ICS file on the local computer (Calendar -> File -> Export -> Export)
    - Create a new local calendar (Calendar -> File -> New Calendar -> On My Mac)
    - Import the corrupted calendar into the new/empty local calendar (Calendar -> File -> Import...)
    - Delete years and years of old events, including the one that was triggering that error message
    - Export the (now much smaller) local calendar to another ICS file on my computer (Calendar -> File -> Export -> Export)
    - Create a new calendar on the server (Calendar -> File -> New Calendar -> in the offending server-based iCal account)
    - Import the edited/fixed/smaller/no-longer-corrupted calendar into the new/empty server calendar (Calendar -> File -> Import...)
    - Make the newly-created iCal calendar the primary calendar (drag it to the top of the list of calendars on the server)
    - Delete the old/corrupted calendar (right-clicking on the bad calendar in the calendar list - you can only delete it once it's NOT the primary calendar any more)

  • What is the best way to remove personal data from the hard drive in preparation for selling my old Mac?

    I am going to sell or perhaps donate my old Power Mac G5.  Is there an easier, yet comprehensive way to remove my data from the hard-drive?  My best approach is dumping anything I do not want found into the trash.
    I am running Mac OS X 10.5.8 on the Power Mac G5.

    For better security than the simple Erase (which does not actually over-write the data blocks, only clears the directory), choose Security Options, and Zero all data, one pass.
    After a Zero all data, the only way to recover the data is to dis-assemble the drive in a clean room and use expensive test gear to recover a little of the data. That is good for all but Military Secrets.
    If you really trust no one under any circumstances, remove the drive and beat it to death with a hammer.

  • What is the best approach to migrate a SharePoint farm from one data center to another data center?

    We have two web front-end servers, one application server and two database server instances, and we have to migrate the complete farm from one data center to the other data center with minimal downtime and end-user impact.
    Please provide your best input on this.
    Thanks in advance.

    Create a new farm in the secondary Data Center at the same patch level with the desired configuration. Replicate the databases using the method of choice (Mirroring, AlwaysOn, etc.). Create a downtime window where you can then attach the databases to the
    new farm's Web Application(s)/Service Application(s).
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • What's the best way to read/write data from a file (preferably a *.txt file)?

    As in the title.  I have revived a couple of old VIs to read and write three numbers and a 1D array of clusters to/from a *.txt file.  The functionality is not very user-friendly, and it would also be useful if one could open the text files (manually - not through LabVIEW) and still be able to see/understand what was there.
    I was wondering if anyone would be able to come up with a more efficient and/or user friendly method (compatible with lv6.1) 
    James
    Never say "Oops." Always say "Ah, interesting!"
    Attachments:
    Read Spec.vi 110 KB
    Write Spec.vi 58 KB

    My primary goal is to have something that works and is easy and comprehensive to operate.  Generating a human-readable file is just a bonus but would be nice if it could be achieved.
    I enclose pictures of the initial file dialog (for both loading and saving the data - referred to as Spec(s) henceforth), and of the front panel screen seen when a) loading a spec and b) saving a spec.  In the file dialog, you have to already know the exact string to input, or else you'll just be told the file doesn't exist (this applies to both loading and saving).  When saving a spec, you cannot see any files previously saved, nor even any previous specs saved within the file.  This means that one can unwittingly overwrite specs without realising it.
    I'm not sure if I've explained this very well, but the current functionality means that far too much can go wrong.  Additionally, if you forget the file name, you'll have no way of knowing what it should be (the data files are stored on a 'locked' part of our network accessible only by Admin or through the LabVIEW executable).
    Never say "Oops." Always say "Ah, interesting!"
    Attachments:
    File Dialog.JPG 23 KB
    Select The Required Test Spec.JPG 10 KB
    Name of specification.JPG 6 KB

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have number of mappings. We built process flows for mappings as well.
    I would like to know the best approaches to incorporate restart options in our process, i.e. handling the failure of a mapping in a process flow.
    How do we recycle failed rows?
    Are there any built-in features/best approaches in OWB to implement the above?
    Do the runtime audit tables help us to build a restart process?
    If not, do we need to maintain our own (custom) tables to hold such data?
    How did other forum members handle the above situations?
    Any ideas?
    Thanks in advance.
    RI

    Hi RI,
    How many mappings (range) do you have in a process flow? Several hundred (100-300 mappings).
    If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails? Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 is not performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the execution of m2 from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button and choose the Repeat option.
    On restart, will it run m1 again and then m2 and so on, or will it restart at row 1 of m2? You can specify the restart point. "At row 1 of m2" - I don't understand what you mean (all mappings run in set-based mode, so in case of error all table updates are rolled back,
    but there are several exceptions - for example multiple target tables in a mapping without correlated commit, or an error in post-mapping - you must carefully analyze the results of the error).
    What will happen if m3 fails? The process is suspended and you can restart execution from m3.
    By running without failover and with max. number of errors = 0, you reduce recycled failed rows to zero (0). These settings guarantee only two possible return results for a mapping - SUCCESS or ERROR.
    What is the impact if we have a large volume of data? In my opinion, for a large volume of data, set-based mode is the preferred processing mode.
    With this mode you have the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg

  • What is the best way of returning group-by SQL results in TopLink?

    I have a many-to-many relationship between Employee and Project; so,
    an Employee can have many Projects, and a Project can be owned by many Employees.
    I have three tables in the database:
    Employee(id int, name varchar(32)),
    Project(id int, name varchar(32)), and
    Employee_Project(employee_id int, project_id int), which is the join-table between Employee and Project.
    Now, I want to find out, for each employee, how many projects the employee has.
    The sql query that achieves what I want would look like this:
    select e.id, count(*) as numProjects
    from employee e, employee_project ep
    where e.id = ep.employee_id
    group by e.id
    Just for information, currently I am using a named ReadAllQuery and I write my own sql in
    the Workbench rather than using the ExpressionBuilder.
    Now, my two questions are :
    1. Since there is a "group by e.id" on the query, only e.id can appear in the select clause.
    This prevents me from returning the full Employee pojo using ReadAllQuery.
    I can change the query to a nested query like this
    select e.id, e.name, emp.cnt as numProjects
    from employee e,
    (select e_inner.id, count(*) as cnt
    from employee e_inner, employee_project ep_inner
    where e_inner.id = ep_inner.employee_id
    group by e_inner.id) emp
    where e.id = emp.id
    but I don't like the complication of having an extra join because of the nested query. Is there a
    better way of doing something like this? (A flattened alternative is sketched after this thread.)
    2. The second question is what is the best way of returning the count(*) or the numProjects.
    What I did right now is that I have a ReadAllQuery that returns a List<Employee>; then for
    each returned Employee pojo, I call a method getNumProjects() to get the count(*) information.
    I had an extra column "numProjects" in the Employee table and in the Employee descriptor, and
    I set this attribute to be "ReadOnly" on the Workbench; (the value for this dummy "numProjects"
    column in the database is always 0). So far this works ok. However, since the numProjects is
    transient, I need to set the query to refreshIdentityMapResult() or otherwise the Employee object
    in the cache could contain stale numProjects information. What I worry is that refreshIdentityMapResult()
    will cause the query to always hit the database and defeat the purpose of having a cache. Also, if
    there are multiple concurrent queries to the database, I worry that there will be a race condition
    of updating this transient "numProjects" attribute. What is a better way of returning this kind
    of transient information such as count(*)? Can I have the query to return something like a tuple
    containing the Employee pojo and an int for the count(*), rather than just a Employee pojo with the
    transient int inside the pojo? Please advise.
    I greatly appreciate any help.
    Thanks,
    Frans

    No I don't want to modify the set of attributes after TopLink returns it to me. But I don't
    quite understand why this matters?
    I understand that I can use ReportQuery to return all the Employee's attributes plus the int count(*)
    and then I can iterate through the list of ReportQueryResult to construct the Employee pojo myself.
    I was hesitant to do this because I think there will be a performance cost of not being able to
    use lazy fetching. For example, in the case of large result sets where the client only needs a few of them,
    if we use the above approach, we need to iterate through all of them and wastefully create all the Employee
    pojos. On the other hand, if we let Toplink directly return a list of Employee pojo, then we can tell
    Toplink to use ScrollableCursor and to fetch only the first several rows. Please advise.
    Thanks.
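    Regarding the nested-query concern in the question above: as a sketch using the question's own tables, the inner query can usually be avoided by grouping on both columns, since e.name is functionally dependent on e.id:
    select e.id, e.name, count(*) as numProjects
    from employee e, employee_project ep
    where e.id = ep.employee_id
    group by e.id, e.name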

  • What is the best approach to take a daily backup of an application from a CQ5 server?

    Hello,
    How do we take a daily backup to preserve the data from the server?
    What is the best approach?
    Regards,
    Satish Sapate.

    The link shared by Ryan should give enough information.
    In case you are backing up a large repository, note that the Data Store holds large binaries, which are only stored once. To reduce the backup time, remove the datastore from the backup by following [1] (CQ 5.3 example).
    [1] In order to remove the datastore from the backup you will need to do the following:
    Assuming your repository is under /website/crx/repository and you want to move your datastore to /website/crx/datastore
        stop the crx instance
        mv /website/crx/repository/shared/repository/datastore /website/crx/
        Then modify repository.xml by adding the new path configuration to the DataStore element.
    Before:
    <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
    <param name="minRecordLength" value="4096"/>
    </DataStore>
    After:
    <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
    <param name="path" value="/website/crx/datastore"/>
    <param name="minRecordLength" value="4096"/>
    </DataStore>
    After doing this, you can safely run separate backups on the datastore while the system is running without affecting performance very much.
    Following our example, you could use rsync to back up the datastore:
    rsync -av --ignore-existing /website/crx/datastore /website/backup/datastore

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users. (We can store only the relevant data in a particular schema, so the number of records per table can be reduced by replicating tables, but then we also have to maintain all the data in another schema, and we would need to update two schemas for a given session. Because we maintain a separate schema for one user and another schema for all the data, there may be updating problems.)
    OR
    2. Using a single schema for all users.
    Note: All users may access the same tables, and there may be a lot more records than in the previous case.
    Which is the best option?
    Please give your valuable ideas.

    "It is true, but I want a solution from you all." - I want you to tell me how to fix my friend's car.

  • What is the best way to move large mailboxes between datacenters?

    What is the best way to move large mailboxes between datacenters?

    Hi, 
     Are you asking with regard to on-premises Exchange? With Microsoft Online SaaS services (aka Exchange Online) there is no control over, and no need to control, which data center a mailbox resides in.
     With regard to on-premises Exchange, you have two choices: you can move it over the WAN, in which case you would either do a native mailbox move (assuming you have Exchange 2010 or later, you can suspend the move after the copy so you can control the
    time of the cutover) or create a database copy in the second data center and, once the database copies have synchronized, change the active copy.
    The other choice is to move it out of band, which would usually involve an offline seed of the database (you could conceivably move via a PST file, but that would disrupt access to the mailbox and is not really the 'best way').
    In general, Exchange on-premises questions are best asked on the Exchange forum: http://social.technet.microsoft.com/Forums/office/en-US/home?category=exchangeserver
    Thanks,
    Guy 
