JSP and large amounts of data

Hello fellow Java fans
First, let me point out that I'm a big Java and Linux fan, but somehow I ended up working with .NET and Microsoft.
Right now my software development team is working on a web tool for a major microchip manufacturer. This tool handles large amounts of data; some of our online reports generate more than 100,000 rows, which need to be displayed in a web client such as Internet Explorer.
We use Infragistics, a set of controls for .NET. Infragistics lets me load data fetched from a database into a control they call UltraWebGrid.
Our problem comes up when we load large amounts of data into the UltraWebGrid; sometimes we have to load 100,000+ rows. During this load, our IIS server's memory is exhausted, and it can take up to 5 minutes for the server to finish processing and display the 100,000+ row report. We have already verified that the database server (SQL Server) is not the problem; the bottleneck is the IIS web server.
Our team is now considering migrating this web tool to Java and JSP. Can you help me with links, information, or past experiences with loading and displaying amounts of data like the ones we handle in JSP? Help will be greatly appreciated.

Who in the world actually looks at a 100,000-row report?
Anyway, if I were you and I had to do it because some clueless management person decided it was a good idea, I would write a program that, once a day, week, year, or whatever your time period is, produced the report (perhaps as a PDF, though you could do it in HTML if you really must have it that way) and served it as a static file that you link to from your app.
Then the user just has to wait while it downloads, but the web server or web application server will not be bogged down trying to produce that monstrosity.
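If I were sketching that in Java, it might look like the following: a scheduled job that streams the rows from the database straight into a static HTML file once a day. This is a minimal sketch, assuming a hypothetical JDBC URL, query, and output path; adjust all three to your environment.

import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ReportGenerator {

    private static final String JDBC_URL = "jdbc:sqlserver://dbhost;databaseName=reports"; // hypothetical
    private static final String QUERY = "SELECT id, part_no, qty FROM production_report";   // hypothetical

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Regenerate once every 24 hours; pick whatever period the business needs.
        scheduler.scheduleAtFixedRate(ReportGenerator::writeReport, 0, 24, TimeUnit.HOURS);
    }

    static void writeReport() {
        Path out = Path.of("/var/www/static/report.html"); // hypothetical file served statically
        try (Connection con = DriverManager.getConnection(JDBC_URL);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(QUERY);
             PrintWriter w = new PrintWriter(Files.newBufferedWriter(out))) {

            w.println("<html><body><table>");
            // Rows stream from the ResultSet straight to disk, so the
            // whole result never sits in server memory at once.
            while (rs.next()) {
                w.printf("<tr><td>%d</td><td>%s</td><td>%d</td></tr>%n",
                        rs.getInt(1), rs.getString(2), rs.getInt(3));
            }
            w.println("</table></body></html>");
        } catch (Exception e) {
            e.printStackTrace(); // in real code: log and alert
        }
    }
}

The users then download a file that already exists, and the application server never rebuilds the report per request.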

Similar Messages

  • Real-time application and large amounts of data

    Hi,
    I have a real-time application that needs to allocate a large amount of data in memory (more than 5 GB). I'm using JRE 1.6 and I'm planning to migrate to 1.8, because the application experiences a lot of "stop-the-world" pauses every day. I have spent a lot of time analysing and testing all the parameters and policies of the GC.
    The question is: with version 1.8, do the pauses caused by the GC cleaning process disappear?
    Thanks.

    Just noting that the GC only really needs to do something when objects are created and then are no longer in use. So if your application needs a large amount of memory, then keeping it, rather than discarding it, might be a better solution.
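    To make that concrete, here is a minimal sketch of the reuse idea: one preallocated buffer handles every message instead of a fresh allocation per message, so the hot path produces almost no garbage for the GC to collect. The message size and handler are made up for illustration. (Java 8 also ships the G1 collector, enabled with -XX:+UseG1GC and tunable via -XX:MaxGCPauseMillis, which targets shorter pauses, though no collector removes them entirely.)

    import java.nio.ByteBuffer;

    public class BufferReuse {

        private static final int MAX_MESSAGE_SIZE = 64 * 1024; // hypothetical upper bound
        // Allocated once and kept for the life of the application:
        // no per-message garbage is created on the hot path.
        private final ByteBuffer scratch = ByteBuffer.allocateDirect(MAX_MESSAGE_SIZE);

        void handleMessage(byte[] payload) {
            scratch.clear();      // reset position/limit, keep the memory
            scratch.put(payload); // reuse the same backing storage
            scratch.flip();
            // ... process scratch ...
        }
    }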

  • Can LabVIEW handle fast and large amounts of data?

    Hello,
    I want to develop a server for a GPS tracking system. This system will be communicating with about a thousand devices at a time. My question is whether LabVIEW can be used to implement a server on the Linux platform that handles such fast and heavy transactions without the software/system hanging. I can use a multi-core processor, probably a Core 2 Quad, to build this application.
    Please let me know whether LabVIEW can be used to design this type of application.
    Hasan Baig

    Scott W wrote:
    As discussed previously, you should be able to implement your application using LabVIEW, but you should take care to design the code properly.  From what I understand, you need LabVIEW to be able to process ~256 kB/s. LabVIEW will have no problem doing that, but the problem may lie in getting the information from ~1000 computers in 5 seconds.  This will all be determined by:
    1. Your network hardware and set up.
    2. The code used to pull the data from the network.
    There is a library available for using Modbus with LabVIEW; I suggest reading up on it and learning more about the library's API.  You can find all that information here: http://sine.ni.com/nips/cds/view/p/lang/en/nid/201711
    If you would rather have an easy, out-of-the-box solution that does not require too much time, there is the DSC module available for LabVIEW.  It greatly simplifies your application and also makes data communication very efficient.  You can find more information about the DSC module here: http://sine.ni.com/nips/cds/view/p/lang/en/nid/1010
    Just as a point of advice, the DSC module will most likely save you anywhere from 1 to 2 weeks of solid work, depending on your experience with LabVIEW.
    A couple of notes on the previous (sorry Scott, but people tell me they appreciate me being honest):
    1) The Modbus libraries at one time (I don't know if it has been fixed) were a CPU pig, because there were no "zero ms waits" in any of the loops. It takes some time, but it can be fixed.
    2) Although DSC may save 1-2 weeks if we don't know LV, it ends up costing me 1-2 weeks of effort every time I go through a major upgrade (I have been supporting DSC apps since BridgeVIEW 2.1). Plus, since all of the DSC functions are hidden, protected, buried... I have no option to fix DSC. I also believe the Modbus in DSC comes through the Lookout protocol driver, which is supported out of (?) Singapore, so I am reduced to e-mail support. Does DSC run on Linux? The file system used to support DSC is complex; it is hard to find out what does what, and when I last checked, security software stepped on its functionality with no sign that logging had stopped until we went back to look for it and found it was not working. It cost one of my customers almost $5K to pay me to troubleshoot why the DSC history was failing on five of their machines. In the end, all I could tell them was, "Unplug the first network cable before starting a test." And as to the speed of DSC: I find it interesting that the white paper comparing DSC with DataSocket reads has disappeared from the NI site.
    So....
    Code it yourself so you can fix it.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Is there any way to connect a Time Capsule to a MacBook Pro directly via USB? I have a large amount of data that I want to back up, and it is taking a very long time (35 GB is taking 3 hrs; I have 2 TB of files in total).

    I have a large amount of data that I want to back up, and it is taking a very long time (35 GB is taking 3 hrs; I have 2 TB of files in total). I want to use the Time Capsule as backup for an archive which is currently stored on a 2 TB WESC HD.

    No, you cannot back up via a direct USB connection.
    But gigabit ethernet is much faster anyway. Are you connected directly by ethernet?
    Is the drive you are backing up from plugged into the TC? That will slow it down something chronic; plug that drive in by its fastest connection method. WESC, sorry, I have no idea. If that is ethernet, use it; otherwise USB direct to the computer. Always think about which way the files come and go; since you are copying from the computer, everything has to go that way, and it makes things slower if they travel over the same cable, if you catch the drift.

  • Is the only way to import a large amount of data and database objects into a primary database to shut down the standby, turn off archive log mode, do the import, then rebuild the standby?

    I have a primary database into which I need to import a large amount of data and database objects. 1) Do I shut down the standby? 2) Turn off archive log mode? 3) Perform the import? 4) Rebuild the standby? Or is there a better way or best practice?

    Instead of rebuilding the (whole) standby, you can take an incremental (from SCN) backup from the primary and restore it on the standby. That way, for example:
    a. if only two out of 12 tablespaces are affected by the import, the incremental backup would effectively contain only the blocks changed in those two tablespaces (and some other changes in SYSTEM and UNDO), provided that there are no other changes in the other ten tablespaces;
    b. if the size of the import is only 15% of the database, the incremental backup to restore to the standby is small.
    Hemant K Chitale

  • Pulling large amounts of data using OData and the client API takes a long time in Project Server 2013

    We are trying to pull large amounts of data in Project Server 2013 using both client API and OData calls, but it seems to take a long time. How is this done?
    In Project Server 2010 we did this by creating SQL views in the reporting database and, for lists, creating a view in the content database. Our IT dept is saying we can't do this anymore. How does a view in the Project database or content database create issues, as long as we don't add a field to a table? So how is one to do this without creating a view?

    Hello,
    If you are using Project Server 2013 on premises, I would recommend using T-SQL against the dbo schema in the Project Web Database for your reports; this will be far quicker than the APIs. You can create custom objects in the dbo schema, see the link below:
    https://msdn.microsoft.com/en-us/library/office/ee767687.aspx#pj15_Architecture_DAL
    It is not supported to query the SharePoint content database directly with T-SQL or add any custom objects to the content database.
    Paul
    Paul Mather | Twitter | http://pwmather.wordpress.com | CPS | MVP | Downloads
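    For what it's worth, a minimal JDBC sketch of the approach Paul describes might look like this. The server name, database name, and view are assumptions; MSP_EpmProject_UserView is a typical Project Web Database reporting view, but verify the objects that actually exist on your own instance.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ProjectReport {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection string; use your PWA database server.
            String url = "jdbc:sqlserver://pwa-sql;databaseName=ProjectWebApp;integratedSecurity=true";
            // Plain T-SQL against the dbo schema, no API round-trips.
            String sql = "SELECT ProjectName, ProjectStartDate FROM dbo.MSP_EpmProject_UserView";
            try (Connection con = DriverManager.getConnection(url);
                 PreparedStatement ps = con.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("ProjectName") + "\t" + rs.getDate("ProjectStartDate"));
                }
            }
        }
    }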

  • Error message when retrieving and displaying large amounts of data

    Hello,
    I am querying my database (MySQL) and displaying the data in a DataGrid (note that I am using Flex 2.0).
    It works fine when the amount of data populating the grid is not much, but when I have a large amount of data I get the following error messages and the grid is not populated.
    ERROR 1
    faultCode:Server.Acknowledge.Failed
    faultString:'Didn't receive an acknowledge message'
    faultDetail:'Was expecting mx.messaging.messages.AcknowledgeMessage, but receive Null'
    ERROR 2
    faultCode:Client.Error.DeliveryInDoubt
    faultString:'Channel disconnected'
    faultDetail:'Channel disconnected before and acknowledge was received'
    Note that my DataGrid is populated when I run the query on my server, but it does not work on my client PCs.
    Your help would be greatly appreciated here.
    Awaiting a reply.
    Regards

    Hello,
    I am using Remote Object services, using a component with ColdFusion as the destination.

  • My phone is using large amounts of data; when I go to System Services, it's my mapping services that are causing it. What are mapping services and how do I switch them off? I really need help.

    I have the same problem. I switched off Location Services, Maps in data, whatever else maps could be involved in, and then just last night it chewed 100 MB. I'm also on Vodacom, so I'm seeing a pattern here somehow. Siri was switched on, however, so I have switched it off now and will see what happens. But I'm going to go into both Apple and Vodacom this afternoon, because this must be sorted out; it's a serious issue we have on our hands, and some uproar needs to be made against those responsible!

  • What Java collection for a large amount of data and a user-customizable record?

    I'm trying to write an application which operates on a large amount of data. I want the user to be able to customize the data structure (record) from different types of variables (float, int, bool, string, enums). These records should be stored in some kind of array. Size of a record: 1-200 variables; size of the array of those records: about 100,000 items (one record every second through a whole day). I want these data stored in an embedded database (SQLite, HSQLDB) and accessed using simple JDBC. Could you give me some advice on how to design those data structures? Sincerely yours :)
    OK, maybe I should give an example. This is some C++ code (BYTE and ParamType are defined here so the snippet stands alone).
    I made an interface:

    #include <sstream>
    #include <string>
    #include <vector>

    typedef unsigned char BYTE;
    struct ParamType { enum { INTEGER = 1 }; };

    class CParamI {
    public:
        virtual ~CParamI() {}
        virtual std::string toString() = 0;
        virtual void addValue( CParamI *src ) = 0;
        virtual void setValue( CParamI *src ) = 0;
        virtual BYTE getType() = 0;
    };

    Then I made a template class derived from the CParamI interface:

    template <class T>
    class CParam : public CParamI {
    public:
        CParam();
        void setValue( T val ) { value = val; }
        T getValue() { return value; }
        std::string toString() {
            std::ostringstream os;
            os << value;
            return os.str();
        }
        BYTE getType() { return itemType; }
        void addValue( CParamI *src ) { /* accumulation omitted */ }
        // Copy the value from another parameter, but only when the type tags match.
        void setValue( CParamI *src ) {
            if ( itemType == src->getType() ) {
                CParam<T> *ptr = static_cast<CParam<T>*>( src );
                value = ptr->value;
            }
        }
    private:
        BYTE itemType;
        T value;
    };

    A sample constructor for the <int> specialization:

    template<> CParam<int>::CParam() {
        itemType = ParamType::INTEGER;
    }

    This solution makes it possible for me to keep a collection of CParamI pointers:

    std::vector<CParamI*> myCollection;
    CParam<int> *pi = new CParam<int>();
    pi->setValue(10);
    myCollection.push_back( pi );   // implicit upcast; no explicit cast needed

    Is this a correct solution? My main problem is getting data back out of the collection: I have to check its data type using the getType() method of the CParamI interface.
    Could you please give me some advice, some idea of how to do this right in Java?

    If you have the requirement of being able to configure on the fly, then what I've done in the past is just put everything into data pairs in a list, something along the lines of (<Vector>, <String>), where the Vector stores your data and the String holds the data type. I would then write a checker to validate the input against the SQL datatypes I want to support on the project. It's not a big deal with the amount of data you are talking about; see the sketch below.
    The problem you're going to have is when you try to allow dynamic definition, on the fly, of data being input to a table that has already been defined. Your DB will not support that, unless you just store that data pair, which I do not suggest.
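    As a concrete starting point, here is a minimal Java sketch of the tagged-parameter idea from the C++ code above, assuming nothing beyond the standard library: an enum carries the type tag that the C++ version kept in a BYTE, and generics keep value access type-safe.

    import java.util.ArrayList;
    import java.util.List;

    public class Params {

        enum ParamType { INTEGER, FLOAT, BOOL, STRING }

        static final class Param<T> {
            private final ParamType type;
            private T value;

            Param(ParamType type, T value) {
                this.type = type;
                this.value = value;
            }
            ParamType getType() { return type; }
            T getValue()        { return value; }
            void setValue(T v)  { value = v; }
            @Override public String toString() { return String.valueOf(value); }
        }

        public static void main(String[] args) {
            // A "record" is a list of typed parameters, sized 1-200 as needed.
            List<Param<?>> row = new ArrayList<>();
            row.add(new Param<>(ParamType.INTEGER, 10));
            row.add(new Param<>(ParamType.STRING, "sensor-A"));

            // Dispatch on the tag instead of casting blindly.
            for (Param<?> p : row) {
                switch (p.getType()) {
                    case INTEGER:
                        System.out.println("int:    " + p.getValue());
                        break;
                    case STRING:
                        System.out.println("string: " + p.getValue());
                        break;
                    default:
                        System.out.println("other:  " + p);
                }
            }
        }
    }

    For persistence, each record can then be flattened to one row per parameter (record id, parameter index, type tag, value as text) in SQLite or HSQLDB over plain JDBC, which matches the validate-against-supported-SQL-types checker suggested above.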

  • Scrolling a large amount of data with an unscrolled header: how to do it?

    I am in much need of scrolling a large amount of data without scrolling the header, i.e. while scrolling, only the data will scroll, so that I can still see the header. I am using JSP in the Struts framework. Waiting for your help, friends. Thanks in advance.

    1) Install Google on your machine.
    2) Open it in your favourite web browser.
    3) Note the input field and the buttons.
    4) Enter smart keywords like "scrollable", "table", "fixed" and "header" in that input field.
    5) Press the first button. You'll get a list of links.
    6) Follow the relevant links and read them thoroughly.

  • Large Amount of Data in JSF

    Hello,
    I am using the Table Group component for displaying data in my application designed in Java Studio Creator.
    I have enabled paging on the component. I use a CachedRowSet in the bean for the page to get the data. This works very well at the moment in my development environment, where I am testing with a small amount of data.
    I was wondering how this component performs with very large amounts of data (>75,000 rows). I noticed that there is a button available for users to retrieve all the rows. So I was wondering, apart from that instance, when viewing in paged mode does the component fetch all the results from the database every time?
    Which component would be best suited for displaying large amounts of data in a table format?
    Thanks In Advance!!

    Thanks for your reply. The table control that I use does have paging as a feature and I have enabled it. It still takes time to load the data initially.
    I wonder if it has to do with the logic of paging: how do you specify which set of 20 records to extract in SQL?
    Thanks for your help!!
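    The usual answer is to push the paging into the SQL itself, so only the 20 rows for the requested page ever leave the database. A minimal sketch, assuming a hypothetical items table and MySQL/PostgreSQL-style LIMIT/OFFSET (on SQL Server or Oracle you would use ROW_NUMBER() instead):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PagedQuery {

        static final int PAGE_SIZE = 20;

        static void printPage(Connection con, int pageNumber) throws Exception {
            // Stable ORDER BY is required: without it, pages may overlap or skip rows.
            String sql = "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, PAGE_SIZE);
                ps.setInt(2, pageNumber * PAGE_SIZE); // page 0 -> rows 1-20, page 1 -> 21-40, ...
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + "\t" + rs.getString("name"));
                    }
                }
            }
        }

        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection("jdbc:mysql://dbhost/app")) { // hypothetical
                printPage(con, 0); // first page of 20
            }
        }
    }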

  • BEx Report Designer - Large amount of data issue

    Hi Experts,
    I am trying to execute (on the Portal) a report made in BEx Report Designer, with about 30,000 pages, and the only thing I get is a blank page. Everything works fine at about 3,000 pages. Do I need to set something to allow processing of such a large amount of data?
    Regards
    Vladimir

    Hi Sauro,
    I have not seen this behavior, but it has been a while since I tried to send an input schedule that large. I think the last time was on a BPC NW 7.0 SP06 system and it worked OK. If you are on a recent support package, then you should search for relevant notes (none come to mind for me, but searching yourself is always a good idea) and if you don't find one then you should open a support message with SAP, with very specific instructions for recreating the problem from a clean input-schedule.
    Good luck,
    Ethan

  • Advice needed on how to keep large amounts of data

    Hi guys,
    I'm not sure what the best way is to make large amounts of data available to my Android app on the local device.
    For example, records of food ingredients, in the hundreds?
    I have read and successfully created .db's using this tutorial:
    http://help.adobe.com/en_US/AIR/1.5/devappsflex/WS5b3ccc516d4fbf351e63e3d118666ade46-7d49.html
    However, to populate the database I use Flash? So this kind of defeats the purpose of it. There is no point in me shifting a massive array of data from Flash to a SQL database when I could access the data directly from the AS3 array.
    So maybe I could create the .db with an external program? But then how would I include that .db in the APK file and deploy it to users' Android devices?
    Or maybe I could create an AS3 class with an XML object in it and use that as a means of data storage?
    Any advice would be appreciated

    You can use any means you like to populate your SQLite database, including using external programs, (temporarily) embedding a text file with SQL statements, executing some SQL from AS3 code, etc.
    Once you have populated your db, deploy it with your project:
    http://chrisgriffith.wordpress.com/2011/01/11/understanding-bundled-sqlite-databases-in-air-for-mobile/
    Cheers, - Jon -

  • Error generating reports with large amounts of data using OBIR

    Hi all,
    we have integrated OBIR (Oracle BI Reporting) with OIM (Oracle Identity Management) to generate custom reports. Some of the custom reports contain a large amount of data (approx. 80-90K rows with 7-8 columns), and the queries for these reports primarily use the audit tables and resource form tables. Now when we try to generate a report, it works fine with HTML, where the report is generated directly in the console; but when we try to generate the same report and save it as PDF or Excel, it fails with the following error.
    [120509_133712190][][STATEMENT] Generating page [1314]
    [120509_133712193][][STATEMENT] Phase2 time used: 3ms
    [120509_133712193][][STATEMENT] Total time used: 41269ms for processing XSL-FO
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Helvetica closed.
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Times-Roman closed.
    [120509_133712848][][PROCEDURE] FO+Gen time used: 41924 msecs
    [120509_133712848][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) is called.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) done. All inputs are cleared.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] End Memory: max=496MB, total=496MB, free=121MB
    [120509_133818606][][EXCEPTION] java.net.SocketException: Socket closed
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
    at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
    at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
    at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
    at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:304)
    at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
    at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
    at oracle.apps.xdo.servlet.util.IOUtil.readWrite(IOUtil.java:47)
    at oracle.apps.xdo.servlet.CoreProcessor.process(CoreProcessor.java:280)
    at oracle.apps.xdo.servlet.CoreProcessor.generateDocument(CoreProcessor.java:82)
    at oracle.apps.xdo.servlet.ReportImpl.renderBodyHTTP(ReportImpl.java:562)
    at oracle.apps.xdo.servlet.ReportImpl.renderReportBodyHTTP(ReportImpl.java:265)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:270)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:250)
    at oracle.apps.xdo.servlet.XDOServlet.doGet(XDOServlet.java:178)
    at oracle.apps.xdo.servlet.XDOServlet.doPost(XDOServlet.java:201)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at oracle.apps.xdo.servlet.security.SecurityFilter.doFilter(SecurityFilter.java:97)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(Unknown Source)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    It seems that we face this issue when the query processing takes some time. Do I need to perform any additional configuration to generate such reports?

    java.net.SocketException: Socket closed
         at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
         at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
         at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
         at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
         at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
         at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:249)
         at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
         at weblogic.servlet.internal.CharsetChunkOutput.implWrite(CharsetChunkOutput.java:396)
         at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:198)
         at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
         at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
         at com.tej.systemi.util.AroundData.copyStream(AroundData.java:311)
         at com.tej.systemi.client.servlet.servant.Newdownloadsingle.producePageData(Newdownloadsingle.java:108)
         at com.tej.systemi.client.servlet.servant.BaseViewController.serve(BaseViewController.java:542)
         at com.tej.systemi.client.servlet.FrontController.doRequest(FrontController.java:226)
         at com.tej.systemi.client.servlet.FrontController.doPost(FrontController.java:128)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3498)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(Unknown Source)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:17
    (Please help us find a solution to this issue; it is in production and we need it ASAP.)
    Thanks in Advance
    Edited by: 909601 on Jan 23, 2012 2:05 AM

  • With journaling, I have found that my computer is saving a large amount of data, logs of all the changes I make to files; how can I clean up these logs?

    For example, in Notes, I have written three notes; however if I click on 'All On My Mac' on the side bar, I see about 10 different versions of each note I make, it saves a version every time I add or delete a sentence.
    I also noticed, that when I write an email, Mail saves about 10 or more draft versions before the final is sent.
    I understand that all this journaling provides a level of security and prevents data loss; but I was wondering, is there a function to clean up journal logs once in a while?
    Thanks
    Roz

    Are you using Microsoft Word?  Microsoft thinks its users are idiots: they put up a lot of pointless messages that annoy and worry users.  I have seen this message from Microsoft Word; it's annoying.
    As BDaqua points out...
    When you copy information via Edit > Copy or Command+C, or cut it via Edit > Cut or Command+X, you place the information on the clipboard. When you paste information, via Edit > Paste or Command+V, you copy information from the clipboard into your data file.
    If you Edit > Cut or Command+X and you do not paste the information and you quit Word, you could be losing information.  Microsoft is very worried about this. When you quit Word, it checks whether there is information on the clipboard, and if so, it puts out this message.
    You should be saving your work more than once a day; I'd save every 5 minutes.  Command+S does a save.
    Robert
