HotSpot optimization for large amounts of array use

Greetings all,
I'm hoping some of the people using Java for crypto and/or image processing/media file processing might be wandering by to offer some input.
We are building a secure storage system for media files, currently using the Twofish algorithm (one of the AES candidates that wasn't selected). We are bulk-encrypting files for entry into the storage system, and since I was already using Java for some of the management in that process, I decided to use the JCE framework to get the cryptography working.
I hate to say it, but this Java evangelist is a bit appalled at how slow it seems to go. My suspicion is the array bounds checking in the code, but I haven't gone byte-code snooping yet. I've tried a few of the symmetric algorithms (DES, AES/Rijndael and Twofish) and get the same sort of results. I've also tried the Cryptix library for Twofish, as the Sun JCE doesn't include it.
In straight C code, a 700 MHz G4 and a 750 MHz UltraSPARC-III both get around 20-25 MB/s encoding.
In Java, I'm getting about 1.3-1.9 MB/s. I reduced the Java code to raw crypto without using the JCE, and it didn't make a noticeable difference.
That's a BIG gap. Not 10%, not 20%, but roughly a 90-95% reduction in performance.
Profiling with hprof (-Xrunhprof) shows about 90% of the time in the crypto routines, all of it compiled via HotSpot.
The setups are Java 1.3.1 on the G4 and 1.4.0_01 on the UltraSPARC-III.
Settings were -server -Xmx128m -Xms128m
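For context, throughput numbers like these typically come from a harness along the following lines (a minimal sketch, not my actual test code; the algorithm string, key size and buffer size are illustrative):

    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    public class CipherBench {
        public static void main(String[] args) throws Exception {
            SecretKeySpec key = new SecretKeySpec(new byte[16], "AES"); // timing only, not a real key
            Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key);

            byte[] in  = new byte[1 << 20];   // 1 MB input buffer
            byte[] out = new byte[1 << 20];
            int mb = 64;                      // total megabytes to encrypt

            long t0 = System.currentTimeMillis();
            for (int i = 0; i < mb; i++) {
                c.update(in, 0, in.length, out, 0);   // bulk-encrypt one buffer
            }
            long ms = System.currentTimeMillis() - t0;
            System.out.println((mb * 1000.0) / ms + " MB/s");
        }
    }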
Does the bounds checking impose that high of a penalty?
Reading the book Java Platform Performance (excellent book, BTW), Steve Wilson and Jeff Kesselman indicate that the array checking could be optimized out in certain loop constructs in HotSpot, but that preliminary examination showed this would only benefit a narrow range of developers.
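For illustration, the optimizable case they describe is the loop whose index is visibly bounded by the array's own length; an index loaded from another array defeats the analysis (my own sketch, not an example from the book):

    class BoundsCheckDemo {
        // Canonical shape: testing i against a.length lets the JIT prove each
        // access is in bounds and hoist the per-iteration check.
        static int sumDirect(int[] a) {
            int sum = 0;
            for (int i = 0; i < a.length; i++) {
                sum += a[i];
            }
            return sum;
        }

        // Harder shape: a[idx[i]] cannot be proven in bounds ahead of time,
        // so every access keeps its bounds check.
        static int sumIndirect(int[] a, int[] idx) {
            int sum = 0;
            for (int i = 0; i < idx.length; i++) {
                sum += a[idx[i]];
            }
            return sum;
        }
    }

Cipher S-box lookups are mostly the second shape, which may be why the penalty survives compilation.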
With JMF, JCE, and the increasing capabilities of Java2D and 3D, I expect this sort of processing to increase dramatically, but only if this level of performance penalty can be removed.
Anything else I might try?
Dallas

Thanks for the reply. I've been looking into it further, but with no real advance. I've implemented a Twofish JNI bridge that runs the actual encryption natively, at pretty much native speeds. The Twofish cipher copies all array elements locally, and the rest of the cipher is bit manipulation on bytes. I haven't dug deep enough into the actual cipher to see where the performance penalty is being incurred.
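For reference, the Java side of such a bridge amounts to little more than this (a minimal sketch; the library and method names here are hypothetical, not the actual bridge):

    class TwofishNative {
        static {
            System.loadLibrary("twofishjni"); // hypothetical native library name
        }
        // Hypothetical native entry point: encrypts len bytes of in[] into out[].
        static native void encrypt(byte[] in, byte[] out, int len);
    }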
Possibly I'm mistaken in thinking it was the array access, but I didn't expect the raw bit twiddling and bitwise operators to be slow.
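For reference, cipher inner loops are dominated by code of this shape (an illustrative fragment, not the actual Cryptix source): every byte load widens to int and needs an & 0xff mask, and each array access carries a bounds check unless the JIT can remove it.

    class Pack {
        // Assemble a little-endian 32-bit word from four bytes, the way block
        // ciphers typically consume their input.
        static int load32(byte[] b, int off) {
            return (b[off]     & 0xff)
                 | (b[off + 1] & 0xff) << 8
                 | (b[off + 2] & 0xff) << 16
                 | (b[off + 3] & 0xff) << 24;
        }
    }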

Similar Messages

  • My iPhone has a large amount of capacity used by "Other", as shown in iTunes when syncing. I can't determine where 4.3 GB of "Other" is from. Even after eliminating all of the categories of music, movies and the like, it still uses 4.3 GB for "Other".

    My iPhone has developed a large amount of "Other", as shown in the capacity bar when syncing in iTunes. Even after eliminating songs, movies and all other sync items, it still uses 4.3 GB of "Other". This is a relatively new development. How can I determine the cause and free up this capacity?

    "Other" includes data such as iPhone settings, contacts, calendar events, Safari bookmarks/cache/cookies, email and email attachments, SMS/MMS/iMessages, notes, app data, stored passwords, search index, home screen organization, wi-fi network data, etc.  But it should be around 1 GB; 4.3 GB is an indication that something is corrupt, and you will have to restore your phone to correct it.
    Before restoring, be sure to import all your photos to your computer and back up your contacts (by syncing them with iCloud or Google, to a supported program such as Outlook or Mac Address Book, or using an app like My Contacts Backup), as contacts are not fully backed up in the iPhone backup.  Once this is done, follow this guide: http://support.apple.com/kb/HT1414, being careful to back up your phone at step 6.  At the end of the process your phone will restart and you will go through the setup screens; during this setup, when given the option, choose Restore from iTunes Backup and restore from the backup you made earlier.  Then connect to iTunes and see how large "Other" is.  If it's still that large, then the corrupt data was contained in the backup you restored from, and you will have no choice except to repeat the process, but this time restoring your phone as a new iPhone.

  • SharePoint Foundation 2013 Optimization For Large File Transfer?

    We are considering upgrading from  WSS 3.0 to SharePoint Foundation 2013.
    One of the improvements we want to see after the upgrade is a better user experience when downloading large files.  It can be done now, but it is not reliable.
    Our document library consists of mostly average sized Office documents, but it also includes some audio and video files and software installer package zip files ranging from 100MB to 2GB in size.
    I know we can change the settings to "allow" larger than default file downloads, but how do we optimize the server setup to make these large file transfers work as seamlessly as possible? More RAM on the SharePoint Foundation server? Other Windows, SharePoint or IIS optimizations? The files will often be downloaded from the Internet, so we will not have control over the download speed.

    SharePoint is capable of sending large files; it is a stateless HTTP system like any other website in that regard. Given your server is sized appropriately for the amount of concurrent traffic you expect, I don't see any special optimizations required.
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
    I see information like this posted warning against doing it, as if large files are going to cause your SharePoint server and SQL to crash.
    http://blogs.technet.com/b/praveenh/archive/2012/11/16/issues-with-uploading-large-documents-on-document-library-wss-3-0-amp-moss-2007.aspx
    "Though SharePoint is meant to handle files that are up to 2 gigs in size, it is not practically feasible and not recommended as well."
    "Not practically feasible" sounds like a pretty dire warning to stay away from large files.
    I had seen some other links warning that large files in the SharePoint database cause problems with fragmentation and large amounts of wasted space that doesn't go away when files are removed, or that the server may run out of memory because downloaded files are held in RAM.

  • Error in Generating reports with large amount of data using OBIR

    Hi all,
    we have integrated OBIR (Oracle BI Reporting) with OIM (Oracle Identity Management) to generate custom reports. Some of the custom reports contain a large amount of data (approx. 80-90K rows with 7-8 columns), and the queries for these reports primarily use the audit tables and the resource form tables. Now, when we try to generate a report, it works fine with HTML, where the report is generated directly on the console, but when we try to generate and save the same report as PDF or Excel, it fails with the following error.
    [120509_133712190][][STATEMENT] Generating page [1314]
    [120509_133712193][][STATEMENT] Phase2 time used: 3ms
    [120509_133712193][][STATEMENT] Total time used: 41269ms for processing XSL-FO
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Helvetica closed.
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Times-Roman closed.
    [120509_133712848][][PROCEDURE] FO+Gen time used: 41924 msecs
    [120509_133712848][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) is called.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) done. All inputs are cleared.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] End Memory: max=496MB, total=496MB, free=121MB
    [120509_133818606][][EXCEPTION] java.net.SocketException: Socket closed
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
    at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
    at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
    at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
    at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:304)
    at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
    at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
    at oracle.apps.xdo.servlet.util.IOUtil.readWrite(IOUtil.java:47)
    at oracle.apps.xdo.servlet.CoreProcessor.process(CoreProcessor.java:280)
    at oracle.apps.xdo.servlet.CoreProcessor.generateDocument(CoreProcessor.java:82)
    at oracle.apps.xdo.servlet.ReportImpl.renderBodyHTTP(ReportImpl.java:562)
    at oracle.apps.xdo.servlet.ReportImpl.renderReportBodyHTTP(ReportImpl.java:265)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:270)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:250)
    at oracle.apps.xdo.servlet.XDOServlet.doGet(XDOServlet.java:178)
    at oracle.apps.xdo.servlet.XDOServlet.doPost(XDOServlet.java:201)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at oracle.apps.xdo.servlet.security.SecurityFilter.doFilter(SecurityFilter.java:97)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(Unknown Source)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    It seems we are facing this issue when the query processing takes some time. Do I need to perform any additional configuration to generate such reports?

    java.net.SocketException: Socket closed
         at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
         at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
         at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
         at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
         at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
         at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:249)
         at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
         at weblogic.servlet.internal.CharsetChunkOutput.implWrite(CharsetChunkOutput.java:396)
         at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:198)
         at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
         at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
         at com.tej.systemi.util.AroundData.copyStream(AroundData.java:311)
         at com.tej.systemi.client.servlet.servant.Newdownloadsingle.producePageData(Newdownloadsingle.java:108)
         at com.tej.systemi.client.servlet.servant.BaseViewController.serve(BaseViewController.java:542)
         at com.tej.systemi.client.servlet.FrontController.doRequest(FrontController.java:226)
         at com.tej.systemi.client.servlet.FrontController.doPost(FrontController.java:128)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3498)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(Unknown Source)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:17
    (Please help find a solution to this issue; it's in production and we need it ASAP.)
    Thanks in Advance
    Edited by: 909601 on Jan 23, 2012 2:05 AM

  • Pull large amounts of data using odata, client API and so takes a long time in project server 2013

    We are trying to pull large amounts of data in Project Server 2013 using both client API and OData calls, but it seems to take a long time. How is this done?
    In Project Server 2010 we did this by creating SQL views in the reporting database and, for lists, a view in the content database. Our IT dept is saying we can't do this anymore. How does a view in the Project database or the content database create issues, as long as we don't add fields to the tables? So how is one to go about doing this with a view?

    Hello,
    If you are using Project Server 2013 on premises, I would recommend using T-SQL against the dbo. schema in the Project Web Database for your reports; this will be far quicker than the APIs. You can create custom objects in the dbo. schema, see the link below:
    https://msdn.microsoft.com/en-us/library/office/ee767687.aspx#pj15_Architecture_DAL
    It is not supported to query the SharePoint content database directly with T-SQL or add any custom objects to the content database.
    Paul
    Paul Mather | Twitter | http://pwmather.wordpress.com | CPS | MVP | Downloads

  • SharePoint Library for Large Amounts of Engineering Data

    We are currently using traditional project directory folders for large projects, sometimes with tens of thousands of documents. We are planning to migrate the data to SharePoint, and the path forward is unclear.
    Initially it was recommended to use a library, not numerous folders, to contain the data, so that searching is improved. That sounded great. The 1st project used to pilot this for other projects is divided into 20 different modification packages. A library category was created for MODS with selectable options of the 20 mod package names and "Not Defined" (the default value). Some data items are shared between more than one MOD, so this category can have more than one assignment.
    When we looked at the directory structure in place, we found no consistency in folder names and no consistency in directory structure. Many folders have 5 or 6 (or more) levels of subdirectories. Ideally we want no more than 4 or 5 categories of metadata to define all data. Mapping from chaos into a comparatively small number of categories is daunting.
    When searching this forum I find that libraries should be limited to 2,000 items. There are tens of thousands of items in our pilot project. Surely someone somewhere has encountered this organizational problem. I could use some advice from someone who has been there before.

    John,
    The limit of 2,000 is not a hard limit; the actual number of items you can store in a list is 30,000,000. However, more items will have an impact on rendering performance and on locking of the SQL table.
    Also, the limit you mentioned (2,000) is the list view threshold limit, and it is actually 5,000.
    One important distinction: boundaries are hard limits, which you cannot exceed; supported limits are based on tests, and can be exceeded but may cause issues.
    That being said, I would suggest you check out this link on
    SharePoint Server 2010 capacity management: Software boundaries and limits
    http://technet.microsoft.com/en-us/library/cc262787(v=office.14).aspx
    and explore other ways of optimizing your list.
    Here are some references that would help you optimize:
    http://office.microsoft.com/en-us/sharepoint-foundation-help/manage-lists-and-libraries-with-many-items-HA010377496.aspx
    http://technet.microsoft.com/en-us/library/cc262813(v=office.14).aspx
    http://office.microsoft.com/en-us/sharepoint-server-help/sharepoint-lists-v-techniques-for-managing-large-lists-RZ101874361.aspx
    Hope this helps!
    Ram - SharePoint Architect
    Blog - http://www.SharePointDeveloper.in
    Please vote or mark your question answered, if my reply helps you

  • What java collection for large amount of data and user customizable record

    I'm trying to write an application which operates on a large amount of data. I want the user to be able to customize the data structure (record) from different types of variables (float, int, bool, string, enums). These records should be stored in some kind of array. Size of a record: 1-200 variables; size of the array of those records: about 100,000 items (one record every second throughout a whole day). I want these data stored in some embedded database (SQLite, HSQLDB) with access through plain JDBC. Could you give me some advice on how to design those data structures? Sincerely yours :)
    OK, maybe I should give an example. This will be some C++ code.
    I made an interface:
    class CParamI {
    public:
         virtual ~CParamI() {}
         virtual string toString() = 0;
         virtual void addValue( CParamI * ) = 0;
         virtual void setValue( CParamI * ) = 0;
         virtual BYTE getType() = 0;
    };
    Then I made a template class derived from the interface CParamI:
    template <class T>
    class CParam : public CParamI {
    public:
         void setValue( T val );
         T getValue();
         string toString();
         void setValue( CParamI *src ) {
              if ( itemType == src->getType() ) {
                   CParam<T> *ptr = (CParam<T>*)src;
                   value = ptr->value;
              }
         }
    private:
         BYTE itemType;
         T value;
    };
    A sample constructor of the <int> template:
    template<> CParam<int>::CParam() {
         itemType = ParamType::INTEGER;
    }
    This solution makes it possible for me to write a collection of CParamI:
    std::vector<CParamI*> myCollection;
    CParam<int> *pi = new CParam<int>();
    pi->setValue(10);
    myCollection.push_back((CParamI*)pi);
    Is this a correct solution? My main problem is getting data out of the collection: I have to check its data type using the getType() method of the CParamI interface.
    Please could you give me some advice, some idea of how to do this right in Java.

    If you have the requirement that you have to be able to configure on the fly, then what I've done in the past is just put everything into data pairs in a list: something along the lines of (<Vector>, <String>), where the Vector stores your data and the String contains its data type. I would then make a checker to validate the input according to the SQL datatypes I want to support on the project. It's not a big deal with the amount of data you are talking about.
    The problem you're going to have is when you try to allow dynamic definition, on the fly, of data being input to a table that has already been defined. Your DB will not support that, unless you just store the data pair--which I do not suggest.
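    To make the pair idea concrete in Java, here is a hedged sketch of the original poster's C++ design translated to Java (names are illustrative; an enum replaces the BYTE type tag, and generics replace the template):

    import java.util.ArrayList;
    import java.util.List;

    enum ParamType { INTEGER, FLOAT, BOOL, STRING }

    class Param<T> {
        private final ParamType type;
        private T value;

        Param(ParamType type) { this.type = type; }

        ParamType getType() { return type; }
        T getValue()        { return value; }
        void setValue(T v)  { value = v; }

        // Copy from another Param only if the runtime type tags match,
        // mirroring the type check in the C++ setValue(CParamI*) above.
        @SuppressWarnings("unchecked")
        void setValue(Param<?> src) {
            if (type == src.getType()) {
                value = (T) src.getValue();
            }
        }

        public String toString() { return String.valueOf(value); }
    }

    public class RecordDemo {
        public static void main(String[] args) {
            // One user-defined record: a heterogeneous list of typed parameters.
            List<Param<?>> row = new ArrayList<Param<?>>();
            Param<Integer> pi = new Param<Integer>(ParamType.INTEGER);
            pi.setValue(10);
            row.add(pi);
            for (Param<?> p : row) {
                System.out.println(p.getType() + " = " + p);
            }
        }
    }

    The enum tag makes the getType() dispatch a simple switch, and a JDBC writer can map each ParamType to the corresponding SQL type.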

  • How can I edit a large amount of data using Acrobat X Pro?

    Hello all,
    I need to edit a catalog that contains a large amount of data - mainly the product prices. Currently I can only export the document into an Excel file and then paste the new prices onto the catalog using Acrobat X Pro one by one, which is extremely time-consuming. I am sure there's a better way to make this faster while keeping the accuracy of the data. Thanks a lot in advance if anyone's able to help!

    Hi Chauhan,
    Yes, I am able to edit text/images via the tool box, but the thing is, the catalog contains more than 20,000 price entries, and all I can do is delete the original price info from the catalog and replace it with the revised data from Excel. Repeating this process over 20,000 times would be a waste of time and manpower... Not sure if I've made my situation clear enough? Please just ask away, I really hope to sort it out. Thanks!

  • Using Siebel-OPA connector BO mapping for large amount of data

    Hi,
    We plan to use the BO mapping approach to get multiple values from OPA to Siebel, which we plan to store as multiple records in Siebel.
    1. Is it advisable to do so using BO mapping?
    2. Would IO mapping be a better approach, considering the size of data involved?
    Thanks

    nilskil wrote:
    Hi,
    We plan to use the BO mapping approach to get multiple values from OPA to Siebel, which we plan to store as multiple records in Siebel.
    1. Is it advisable to do so using BO mapping?
    2. Would IO mapping be a better approach, considering the size of data involved?
    Thanks
    For passing lots of data between OPA and Siebel I would definitely recommend using an IO mapping. You will find it faster, and the returned IO XML will also be easier to deal with.
    Cheers
    Frank

  • Email Optimization for Large batch

    I'm sending about 7,000 emails to clients each afternoon. This is all opt-in. The problem is the time it takes to process the emails out of the spool, and the effect it has on other emails from the system. Regular CF business emails get held up for 30 or 45 minutes waiting to get through the spool.
    I have multiple SMTP servers, so I can send the messages to different SMTP servers, but I don't know how to assign a higher priority to emails that are not in the large batch (to get them through the spool).
    Any ideas?
    Thanks.

    You could a) dynamically redefine the SMTP server to be used within each iteration of the loop creating your emails, or b) write a CF process which only sends out messages in blocks of 200 every 30 seconds, or c) a combination of a and b (I would opt for 'c').

  • Tethered shooting for large amount of people

    Here's a little bit of background to my question. I have mainly used Lightroom for organizing imports from my Canon 6D and 7D, and for developing them. The shoot I'm looking for help on is quite different from what I'm accustomed to: I am looking at taking portraits of a large number of people at a school fundraiser for a club, where the lighting conditions are the same and the camera is on a tripod shooting with tethered capture to a laptop. So if anyone could help with the following questions it would be greatly appreciated, thanks.
    1. Is it possible in tethered capture to have the metadata panel open at the same time and be able to have someone type down their email address for each individual picture?
    2. Following question one, is it possible to email all the photos from the shoot individually to the recipients listed in the metadata?
    3. Can Lightroom automatically print photos after taken from a tethered shoot?
    4. Can Lightroom save specific tags in the metadata from a collection into a text file where they are listed along with the file name?
    A bit of help with any of the questions would be great; I haven't attempted anything like this before and frankly haven't found anything that could help in the documentation for Lightroom. I do apologize for any confusing wording; my computer wasn't accessible from where I am, so this was typed on an iPhone.

    The tethered capture floating tool panel is not modal - you can still use Lightroom's regular panels and tools while it's open, so working on a file while a session is active is no problem (provided a new shot doesn't arrive while auto-advance is turned on).
    Lightroom cannot directly email images to an address taken from the IPTC metadata, for that you would need either a plugin or an external application. I don't know of one currently available.
    Again, exporting metadata to a text file is a job for a plugin - but this time there are several available (such as http://www.photographers-toolbox.com/products/lrtransporter.php )

  • How do I remove spaces or special characters within a cell for large amounts of data

    Is there any shortcut to remove spaces between words and numbers within a cell?
    Example:
    Current: .5 lt PET (6)
    Need: .5ltPET(6)
    Is there any shortcut to remove special characters between numbers within a cell?
    Example:
    Current: 0--000--000--0
    Need: 00000000

    Thanks Wayne.
    I have been away from using Numbers or Excel for 4-5 years, so it is slowly coming back to me. I get that I need to use the SUBSTITUTE function; however, I am having trouble getting it to work.
    My Data
    ST PAULI 12/12 NR | $27.16 | 12oz NR(12) | 0--80660--95937--5
    ST PAULI 4/6/12 NR | $28.76 | 12oz NR(6) | 0--80660--95935--1
    ST PAULI DK 12/12 NR | $0.00 | 12oz NR(12) | 0--000--000--0
    ST PAULI DK 4/6/12 NR | $28.76 | 12oz NR(6) | 0--80660--95945--0
    ST PAULI N/A 4/6/12 NR | $20.66 | 12oz NR(6) | 0--80660--95955--9
    CAYMAN JACK 4/6/12 NR | $29.12 | 12oz NR(6) | 8--15829--01006--8
    CAYMAN JACK 8OZ/12PK CAN | $23.18 | 8oz CAN(12) | 8--15829--01061--7
    TGIF LIIT 10OZ FROZEN POUCH | $35.80 | 10oz POUCH(24) | 8--15829--01043--3
    TGIF MARGARITA 10OZ FROZEN POUCH | $35.80 | 10oz POUCH(24) | 8--15829--01047--1
    TGIF PINA COLADA 10OZ FROZEN POUCH | $35.80 | 10oz POUCH(24) | 8--15829--01045--7
    TGIF STRAWBERRY 10OZ FROZEN POUCH | $35.80 | 10oz POUCH(24) | 8--15829--01042--6
    BALLAST PT BIG EYE IPA 1/2 BBL | $190.00 | KEG 1984oz (1/2 KEG) | 0--000--000--0
    BALLAST PT BIG EYE IPA 1/6 BBL | $73.00 | KEG 660.1oz (1/6 KEG) | 0--000--000--0
    BALLAST PT BIG EYE IPA 4/6/12 CAN | $33.00 | 12oz CAN(6) | 6--72438--00052--7
    There are many more, but this is enough to show you. I need to remove all spaces from the first and third columns. I need to remove all "--" from the fourth. Where do I put in the SUBSTITUTE function, and what are source-string, existing-string, new-string, and occurrence?
    Thank You for your help.

  • Exp/Imp alternatives for large amounts of data (30GB)

    Hi,
    I've come into a new role where various test databases are to be 'refreshed' each night with cleansed copies of production data. They have been using the imp/exp utilities with 10g R2. The export process is OK, but what's killing us is the time it takes to transfer... unzip... and import 32GB .dmp files. I'm looking for suggestions on what we can do to reduce these times. Currently the import takes 4 to 5 hours.
    I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities. Are 'Transportable Tablespaces' the next logical solution? I've been reading up on them and could start prototyping/testing the process next week. What else is in Oracle's toolbox I should be considering?
    Thanks
    brian

    Hi,
    I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities
    Data Pump will be faster for a couple of reasons. It uses direct path to unload the data. Data Pump also supports parallel processes, so while one process is exporting metadata, the other processes can be exporting the data. In 11, you can also compress the dumpfiles as you are exporting. (Both data and metadata compression are available in 11; I think metadata compression is available in 10.2.) This will remove your zip step.
    As far as transportable tablespaces go, yes, this is an option. There are some requirements, but if it works for you, all you will be exporting is the metadata, with no data. The data is copied from the source to the target by way of the datafiles. One of the biggest requirements is that the tablespaces need to be read-only while the export job is running. This is true for both exp/imp and expdp/impdp.

  • Query For Large Amount of Data

    Hello All,
    I apologize in advance if I am not posting this in the right section. I am fairly new to APEX and database designing. My goal is to create an inquiry screen for a database of people.
    I am running APEX 4.2 on 11g. The information is stored in 3 tables: Names, Demographics, Address. Each table has a PIN ID column that ties them all together. Each table has almost a million rows.
    Currently I have it set up so that the person types in the name they want to search, which gets passed into a hidden page item on the next page, where there is a report with a select statement based on that page item. Everything works right now; however, it is slow, with a 5-10 second delay before the results come up.
    My question is: is there a better way to set up these tables, and what is the best way to make this faster?
    I'm sorry if this is a vague question, but any help, or a point in the right direction, will be greatly appreciated.
    Thank You !

    976533 wrote:
    Hello All,
    Welcome to the forum: please read the FAQ and forum sticky threads (if you haven't done so already), and update your forum profile with a real handle instead of "976533".
    When you have a problem you'll get a faster, more effective response by including as much relevant information as possible upfront. This should include:
    <li>Full APEX version
    <li>Full DB/version/edition/host OS
    <li>Web server architecture (EPG, OHS or APEX listener/host OS)
    <li>Browser(s) and version(s) used
    <li>Theme
    <li>Template(s)
    <li>Region/item type(s) (making particular distinction as to whether a "report" is a standard report, an interactive report, or in fact an "updateable report" (i.e. a tabular form)
    With APEX we're also fortunate to have a great resource in apex.oracle.com where we can reproduce and share problems. Reproducing things there is the best way to troubleshoot most issues, especially those relating to layout and visual formatting. If you expect a detailed answer then it's appropriate for you to take on a significant part of the effort by getting as far as possible with an example of the problem on apex.oracle.com before asking for assistance with specific issues, which we can then see at first hand.
    I apologize in advance if I am not posting this in the right section. I am fairly new to APEX and database designing. My goal is to create an inquiry screen for a database of people.
    It might be more appropriate to the {forum:id=75} forum, so you should look at the following entries on their FAQ as well:
    <li>{message:id=9360002}
    <li>{message:id=9360003}
    I am running APEX 4.2 on 11g. The information is store in 3 tables; Names, Demographics, Address. Each table had a PIN ID column that ties them all together. Each table has almost a million rows in them.
    Currently I have it set up that the person types in the name they want to search and it gets passed into a hidden page item on the next page where there is a report with a select statement based on the page item. Everything works right now however it is slow. I am having a 5-10 second delay before the results come up.
    My question is, is there a better way to set up these tables. What is the best way to make this faster?
    Are there suitable indexes on the tables?
    Does the report query use them?
    As described above, either: reproduce the problem on apex.oracle.com; or post DDL to allow us to recreate the tables and indexes, and the SQL from your report.

  • XML-Export Error for large amount of data

    Hi there...
    I have an application process which runs on demand (button) and exports data (from a SQL query) into a file (.xls).
    The result is formatted, and the export works fine as long as the query returns just a small amount of data, approx. 8 to 10 rows.
    As the result is stored in a CLOB, I output the data with "htp.prn" in a loop, "cutting" the CLOB into small pieces (varchar).
    However, as soon as the amount is bigger than the mentioned 8 to 10 rows, I get an error (sqlerrm: ORA-06502: PL/SQL: numeric or value error).
    I guess there must be something wrong with my loop or the way I "cut" the CLOB into pieces and output them.
    Maybe someone has a hint for me on where exactly to look?!
    Thanks in advance...
    Johnny
    Here is my code (I removed parts of it which are not important for this issue); it is reposted with proper formatting in the follow-up below.

    Thanks for the hint Paul,
    here is my code in code-tags.
    I appreciate any help!
    Johnny
    declare
    l_xml_header varchar2(32767);
    l_xml_body clob;
    l_xml_text varchar2(32767);
    l_xml_footer varchar2(32767);
    runner number;
    clob_size number;
    v_count number := 1; -- v_count is used in the output loop below; its declaration was presumably among the removed parts (dbms_lob.substr offsets start at 1)
    begin
    runner := 2;
    owa_util.mime_header( 'application/octet', FALSE);
    htp.p('Content-Disposition: attachment; filename="Test.xls"');
    owa_util.http_header_close;
    l_xml_header := '<?xml version="1.0" encoding="utf-8"?>'||chr(10)||
    '<?mso-application progid="Excel.Sheet"?>'||chr(10)||
    '<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"'||chr(10)||
    'xmlns:o="urn:schemas-microsoft-com:office:office"'||chr(10)||
    'xmlns:x="urn:schemas-microsoft-com:office:excel"'||chr(10)||
    'xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet"'||chr(10)||
    'xmlns:html="http://www.w3.org/TR/REC-html40">'||chr(10)||
    '<DocumentProperties xmlns="urn:schemas-microsoft-com:office:office">'||chr(10)||
    '<Version>1.0</Version>'||chr(10)||
    '</DocumentProperties>'||chr(10)||
    '<ExcelWorkbook xmlns="urn:schemas-microsoft-com:office:excel">'||chr(10)||
    '<WindowHeight>8580</WindowHeight>'||chr(10)||
    '<WindowWidth>15180</WindowWidth>'||chr(10)||
    '<WindowTopX>120</WindowTopX>'||chr(10)||
    '<WindowTopY>45</WindowTopY>'||chr(10)||
    '<ProtectStructure>False</ProtectStructure>'||chr(10)||
    '<ProtectWindows>False</ProtectWindows>'||chr(10)||
    '</ExcelWorkbook>'||chr(10)||
    '<Styles>'||chr(10)||
    '<Style ss:ID="Default" ss:Name="Normal">'||chr(10)||
    '<Alignment ss:Vertical="Bottom"/>'||chr(10)||
    '<Borders/>'||chr(10)||
    '<Font ss:FontName="Arial" x:Family="Swiss"/>'||chr(10)||
    '<Interior/>'||chr(10)||
    '<NumberFormat/>'||chr(10)||
    '<Protection/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s22">'||chr(10)||
    '<Font x:Family="Swiss" ss:Bold="1"/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s67">'||chr(10)||
    '<Font ss:FontName="Arial" x:Family="Swiss" ss:Color="#FFFFFF"/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s157">'||chr(10)||
    '<Borders/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s158">'||chr(10)||
    '<Borders>'||chr(10)||
    '<Border ss:Position="Right" ss:LineStyle="Continuous" ss:Weight="1"/>'||chr(10)||
    '</Borders>'||chr(10)||
    '</Style>'||chr(10)||
    '</Styles>';
    for z in 1..1
    loop
      l_xml_body:=l_xml_body||'<Worksheet ss:Name="Worksheet1"> <Table x:FullColumns="1" x:FullRows="1" ss:DefaultColumnWidth="60">';
      l_xml_body:=l_xml_body||'<Row><Cell ss:StyleID="s163"><Data ss:Type="String">Colum1</Data></Cell>'||
                              '<Cell ss:StyleID="s163"><Data ss:Type="String">Colum2</Data></Cell>'||
                              '<Cell ss:StyleID="s163"><Data ss:Type="String">Colum3</Data></Cell>'||
                              '<Cell ss:StyleID="s163"><Data ss:Type="String">...</Data></Cell>'||
                              '<Cell ss:StyleID="s166"><Data ss:Type="String">ColumN</Data></Cell></Row>';
      for z in (
      select
       a."Col1",
       a."Col2",
       b."Col3",
       b."ColN"
      from table1 a,
           table2 b
      where a.id = b.id )  -- closing parenthesis of the cursor query, presumably lost when trimming the post
      loop
          l_xml_body := l_xml_body||'<Row><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                            z.Col1||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                            z.Col2||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                            z.Col3||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                             ...  ||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                            z.ColN||'</Data></Cell>';
          l_xml_body := l_xml_body||'</Row>'||chr(10);
          runner := runner + 1;  
    end loop;
        l_xml_body := l_xml_body||'</Table>';
    end loop;
    clob_size           := dbms_lob.getlength(l_xml_body);
    htp.prn(l_xml_header);
    for i in 1..ceil(clob_size / 32767)
    loop
       l_xml_text := dbms_lob.SUBSTR (l_xml_body, 32767, v_count);
       HTP.prn (l_xml_text);
       v_count := v_count + 32767;
    end loop;
    htp.prn('</Worksheet></Workbook>');
    HTMLDB_APPLICATION.g_unrecoverable_error := TRUE;
       EXCEPTION
          WHEN OTHERS
          THEN
             OWA_UTIL.mime_header ('application/octet', FALSE);
             HTP.prn ('Content-Disposition: attachment; filename="Test.xls"');
             OWA_UTIL.http_header_close;
             HTMLDB_APPLICATION.g_unrecoverable_error := TRUE;
    end;
