Error generating reports with large amounts of data using OBIR

Hi all,
we have integrated OBIR (Oracle BI Reporting) with OIM (Oracle Identity Manager) to generate custom reports. Some of the custom reports contain a large amount of data (approximately 80-90K rows with 7-8 columns), and their queries primarily use the audit tables and resource form tables. When we generate such a report as HTML, the output renders directly on the console and works fine, but when we try to generate and save the same report as PDF or Excel, it fails with the following error:
[120509_133712190][][STATEMENT] Generating page [1314]
[120509_133712193][][STATEMENT] Phase2 time used: 3ms
[120509_133712193][][STATEMENT] Total time used: 41269ms for processing XSL-FO
[120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Helvetica closed.
[120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Times-Roman closed.
[120509_133712848][][PROCEDURE] FO+Gen time used: 41924 msecs
[120509_133712848][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) is called.
[120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) done. All inputs are cleared.
[120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] End Memory: max=496MB, total=496MB, free=121MB
[120509_133818606][][EXCEPTION] java.net.SocketException: Socket closed
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:304)
at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
at oracle.apps.xdo.servlet.util.IOUtil.readWrite(IOUtil.java:47)
at oracle.apps.xdo.servlet.CoreProcessor.process(CoreProcessor.java:280)
at oracle.apps.xdo.servlet.CoreProcessor.generateDocument(CoreProcessor.java:82)
at oracle.apps.xdo.servlet.ReportImpl.renderBodyHTTP(ReportImpl.java:562)
at oracle.apps.xdo.servlet.ReportImpl.renderReportBodyHTTP(ReportImpl.java:265)
at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:270)
at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:250)
at oracle.apps.xdo.servlet.XDOServlet.doGet(XDOServlet.java:178)
at oracle.apps.xdo.servlet.XDOServlet.doPost(XDOServlet.java:201)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at oracle.apps.xdo.servlet.security.SecurityFilter.doFilter(SecurityFilter.java:97)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
It seems we hit this issue when the query processing takes a long time. Do I need to perform any additional configuration to generate such reports?

java.net.SocketException: Socket closed
     at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
     at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
     at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
     at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
     at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
     at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:249)
     at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
     at weblogic.servlet.internal.CharsetChunkOutput.implWrite(CharsetChunkOutput.java:396)
     at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:198)
     at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
     at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
     at com.tej.systemi.util.AroundData.copyStream(AroundData.java:311)
     at com.tej.systemi.client.servlet.servant.Newdownloadsingle.producePageData(Newdownloadsingle.java:108)
     at com.tej.systemi.client.servlet.servant.BaseViewController.serve(BaseViewController.java:542)
     at com.tej.systemi.client.servlet.FrontController.doRequest(FrontController.java:226)
     at com.tej.systemi.client.servlet.FrontController.doPost(FrontController.java:128)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
     at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
     at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
     at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
     at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
     at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3498)
     at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
     at weblogic.security.service.SecurityManager.runAs(Unknown Source)
     at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
     at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
     at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
     at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
     at weblogic.work.ExecuteThread.run(ExecuteThread.java:17
(Please help us find a solution to this issue; it is in production and we need it resolved ASAP.)
Thanks in advance
Edited by: 909601 on Jan 23, 2012 2:05 AM
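One way to take the browser connection out of the picture is to generate the document on the server with the BI Publisher core API and write it to a file (or schedule the report) instead of streaming it through the servlet while the browser waits. A minimal sketch, assuming the standard oracle.apps.xdo FOProcessor API that already appears in the log above; the file paths and names are placeholders:

    import oracle.apps.xdo.template.FOProcessor;

    public class OfflineReportGenerator {
        public static void main(String[] args) throws Exception {
            FOProcessor processor = new FOProcessor();
            processor.setData("/tmp/oim_report_data.xml");    // placeholder: XML data extracted for the report
            processor.setTemplate("/tmp/oim_report.xsl");     // placeholder: XSL-FO template for the layout
            processor.setOutput("/tmp/oim_report.pdf");       // write the result to disk, not the HTTP response
            processor.setOutputFormat(FOProcessor.FORMAT_PDF);
            processor.generate();                             // generation time no longer depends on the browser socket
        }
    }

This is only a sketch of the idea; the point is to decouple the long-running PDF/Excel generation from the HTTP request so a closed client socket cannot abort it. Raising the WebLogic and report-server time-outs may also be worth investigating.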

Similar Messages

  • How do I pause an iCloud restore for an app with a large amount of data?

    I am using an iPhone app which is holding 10 GB of data (media files).
    Unfortunately, although all the data was backed up, my iPhone 4 was faulty and needed to be replaced with a new handset. On restore, the 10 GB of data takes a very long time to come down over Wi-Fi. If the restore is interrupted (I had reached the halfway point during the night) because I go to work or take the dog for a walk, I end up on 3G for a short period of time.
    The next time I am in a Wi-Fi zone, the app starts restoring again right from the beginning.
    How does anyone restore an app with large amounts of data, or pause a restore?

    You can use classifications, but there is no automatic feature to archive like that in web apps.
    In terms of the blog, as I have said to everyone who has posted about blog preview images:
    http://www.prettypollution.com.au/business-catalyst-blog
    Just one example of an image at the start of the blog post rendering out, not hard at all.

  • Bex Report Designer - Large amount of data issue

    Hi Experts,
    I am trying to execute (on the Portal) a report made in BEx Report Designer with about 30,000 pages, and the only thing I get is a blank page. Everything works fine at about 3,000 pages. Do I need to set something to allow processing such a large amount of data?
    Regards
    Vladimir

    Hi Sauro,
    I have not seen this behavior, but it has been a while since I tried to send an input schedule that large. I think the last time was on a BPC NW 7.0 SP06 system and it worked OK. If you are on a recent support package, then you should search for relevant notes (none come to mind for me, but searching yourself is always a good idea) and if you don't find one then you should open a support message with SAP, with very specific instructions for recreating the problem from a clean input-schedule.
    Good luck,
    Ethan

  • Working with a large amount of data

    Hi! I am writing a peer-to-peer video player and I need to operate on huge amounts of data. The downloaded data should be stored in memory for sharing with other peers. Some video files can be 2 GB and more, so keeping all of this data in RAM is not the best solution, I think =)
    Since the Flash Player does not have access to the file system, I cannot save this data in temporary files.
    Is there a solution to this problem?

    No ideas? very sad ((

  • Pulling large amounts of data using OData and the client API takes a long time in Project Server 2013

    We are trying to pull large amounts of data in Project Server 2013 using both the client API and OData calls, but it seems to take a long time. How is this done?
    In Project Server 2010 we did this by creating SQL views in the reporting database and, for lists, creating a view in the content database. Our IT dept is saying we can't do this anymore. How does a view in the Project database or content database create issues, as long as we don't add a field to a table?
    So how is one supposed to do this without creating a view?

    Hello,
    If you are using Project Server 2013 on premise, I would recommend using T-SQL against the dbo schema in the Project Web Database for your reports; this will be far quicker than the APIs. You can create custom objects in the dbo schema, see the link below:
    https://msdn.microsoft.com/en-us/library/office/ee767687.aspx#pj15_Architecture_DAL
    It is not supported to query the SharePoint content database directly with T-SQL or add any custom objects to the content database.
    Paul
    Paul Mather | Twitter | http://pwmather.wordpress.com | CPS | MVP | Downloads
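    To make the suggestion above concrete, here is a minimal JDBC sketch of reading one of the dbo reporting views from Java. The connection string, database name, view and column names (MSP_EpmProject_UserView, ProjectName, ProjectStartDate, ProjectFinishDate) are assumptions to be checked against your own Project Web Database:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ProjectReportQuery {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string; point it at the Project Web Database.
            String url = "jdbc:sqlserver://sqlhost;databaseName=ProjectWebApp;integratedSecurity=true";
            // Assumed reporting view and columns in the dbo schema.
            String sql = "SELECT ProjectName, ProjectStartDate, ProjectFinishDate "
                       + "FROM dbo.MSP_EpmProject_UserView";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("%s | %s | %s%n",
                            rs.getString(1), rs.getDate(2), rs.getDate(3));
                }
            }
        }
    }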

  • How can I edit a large amount of data using Acrobat X Pro?

    Hello all,
    I need to edit a catalog that contains a large amount of data - mainly the product prices. Currently I can only export the document into an Excel file and then paste the new prices onto the catalog using Acrobat X Pro one by one, which is extremely time-consuming. I am sure there's a better way to do this faster while keeping the accuracy of the data. Thanks a lot in advance if anyone is able to help!

    Hi Chauhan,
    Yes, I am able to edit text/images via the toolbox, but the thing is the catalog contains more than 20,000 price entries, and all I can do is delete the original price info from the catalog and replace it with the revised data from Excel. Repeating this process over 20,000 times would be a waste of time and manpower... Not sure if I have made my situation clear enough? Please just ask away, I really hope to sort it out. Thanks!

  • NI Reports crashes with large amounts of data

    I'm using LabVIEW 6.02 and am using some of the NI Reports tools to generate a report. I create what can be a large data array, which is then sent to the "append table to report" VI and printed. The program works fine, although it is slow, as long as the data that I send to the "append table to report" VI is less than about 30 kB. If the amount of data is too large, LabVIEW terminates execution with no error code or message displayed. Has anyone else had a similar problem? Does anyone know what is going on, or better yet, how to fix it?

    Hello,
    I was able to print a 100x100 element array of 5-character strings (~50 kB of data) without receiving a crash or error message. However, it did take a LONG time...about 15 minutes for the VI to run, and another 10 minutes for the printer to start printing. This makes sense, because 100x100 elements is a gigantic amount of data to send into the NI-Reports ActiveX object that is used for printing. You may want to consider breaking up your data into smaller arrays and printing those individually, instead of trying to print the giant array at once.
    I hope these suggestions help you out. Good luck with your application, and have a pleasant day.
    Sincerely,
    Darren N.
    NI Applications Engineer
    Darren Nattinger, CLA
    LabVIEW Artisan and Nugget Penman

  • Error when executing a report with a large data selection in InfoView

    Hello,
    This is regarding an error we are facing in our BO system.
    We have installed:
    BusinessObjects Enterprise XI 3.1
    Crystal Reports 2008
    When we try to execute a report in BO InfoView on the production system, we get the following error (after 10 minutes of execution):
    ========================================================================================
    Windows Internet Explorer
    Stop running this script?
    A script on this page is causing Internet Explorer to run slowly.
    If it continues to run, your computer might become unresponsive.
    ========================================================================================
    Please note: we retrieve the data via Crystal Reports; only execution in InfoView gives the error.
    If we execute the same report with less data (e.g. a single day's data), we get output in InfoView.
    However, when we execute the report with more data (6 months of data), we get the attached error after 10 minutes of execution time.
    A helpful reply will be awarded.
    Regards.
    Edited by: Nirav Shah on Dec 14, 2010 6:19 AM

    If it is indeed a server timeout issue, the following document might be useful: http://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/606e9338-bf3e-2b10-b7ab-ce76a7e34432
    It's about troubleshooting time-outs in BusinessObjects Enterprise XI, but a lot of it is still applicable.
    Hope this helps...
    Martijn van Foeken
    Focuzz BI Services
    http://www.focuzz.nl
    http://nl.linkedin.com/in/martijnvanfoeken
    http://twitter.com/mfoeken

  • out.println() problems with large amounts of data in a JSP page

    I have this kind of code in my jsp page:
    out.clearBuffer();
    out.println(myText); // size of myText is about 300 kB
    The problem is that I manage to print the whole text only sometimes. Very often the receiving page gets only the first 40 kB and then the printing stops.
    I have run tests where I split myText into smaller parts and out.print() them one by one:
    Vector texts = splitTextToSmallerParts(myText);
    for (int i = 0; i < texts.size(); i++) {
        out.print(texts.get(i));
        out.flush();
    }
    This produces the same kind of result: sometimes all parts are printed, but mostly only the first parts.
    I have tried to increase the buffer size, but that does not make the printing reliable either. I have also tried autoFlush="false" so that I flush before the buffer overflows; again the same result, sometimes it works and sometimes it doesn't.
    Originally I use a system where Visual Basic in Excel calls the JSP page. However, I don't think that matters, since the same problems occur if I use a browser.
    If anyone knows something about problems with large JSP pages, I would appreciate the help.

    Well, there are many ways you could do this, but it depends on what you are looking for.
    For instance, generating an Excel spreadsheet could be quite easy:
    import javax.servlet.*;
    import javax.servlet.http.*;
    import java.io.*;
    public class TableTest extends HttpServlet {
        public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
            // Tab-separated text served with an Excel content type opens directly in Excel
            response.setContentType("application/xls"); // "application/vnd.ms-excel" is the more standard MIME type
            PrintWriter out = new PrintWriter(response.getOutputStream());
            out.println("Col1\tCol2\tCol3\tCol4");
            out.println("1\t2\t3\t4");
            out.println("3\t1\t5\t7");
            out.println("2\t9\t3\t3");
            out.flush();
            out.close();
        }
    }
    Just try this simple code, it works just fine... I used the same approach to generate a report of 30,000 rows and 40 columns (more or less 5 MB), so it should do the job for you.
    Regards
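    If the goal is simply to push one very large string to the client reliably, another thing worth trying (a sketch only, assuming a plain servlet; not a guaranteed fix for the truncation described above) is to write it in fixed-size slices and flush after each one, so no single write has to fit in the response buffer:
    import java.io.IOException;
    import java.io.Writer;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class LargeTextServlet extends HttpServlet {
        private static final int CHUNK = 8 * 1024;       // write 8 KB at a time

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            String myText = buildReport();                // stand-in for the ~300 kB string
            response.setContentType("text/plain");
            Writer out = response.getWriter();
            for (int start = 0; start < myText.length(); start += CHUNK) {
                int end = Math.min(start + CHUNK, myText.length());
                out.write(myText, start, end - start);    // write one slice
                out.flush();                              // push it to the client before continuing
            }
        }

        private String buildReport() { return "...placeholder..."; }
    }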

  • XY graphing with a large amount of data

    I was looking into graphing a fairly substantial amount of data. It is bursted across serial.
    What I have is 30 values corresponding to remote data sensors. The data for each comes across together, so I have no problem having the data grouped; it is, effectively, in an array of size 30. I've been wanting to place this data in a graph.
    The period varies between 1/5 second and 2 minutes between receptions (it's wireless and mobile, so signal strength varies). This rules out a waveform graph, as the time isn't constant and there's a lot of data (so no random NaN insertion). This leaves me with an XY graph.
    My primary interest is the last 10 minutes or so, with a desire to see up to an hour or two back.
    The data is fairly similar, and I'm trying to split it into groups of 4 or 5 sets of ordered pairs per graph, if possible.
    Problems:
    1. If data comes in slowly enough, everything is OK, but the time needed to synchronously update the graph(s) can often exceed the time it takes to fully receive the chunk of data which contains these data points. Updating asynchronously is no use, as the graphs need to stay reasonably in tune with the most recent data received; I can't have exponential growth in the delta between the time represented on the graph and the time the last bit of data was received.
    2. I could use some advice on making older data points more sparse, so that older data can be viewed but with a sort of 'decay' where I no longer care about the 1/5-second resolution.
    I'm most concerned with solving problem 1, but random suggestions on 2 are most welcome.

    I didn't quite get the first question. Could you try to find out where exactly the time is consumed in your program and then refine your question?
    To the second question (which may also solve the first): you can store all the data to a file as it arrives. Keep the most recent data in, let's say, a shift register of the update loop. Add a data point corresponding to the start of the measurement to the data and wire it to an XY graph. Make the X scrollbar visible. Handle the event for the XY graph's X scale change: when the scale is changed, i.e. the user scrolls the scroll bar, load data from the file and display it on the XY graph. This is a little simplified; in practice it's a bit more complicated.
    In my project, I am writing an XY graph XControl to handle a similar issue. I have huge data sets, i.e. multichannel recordings with millisecond resolution over many hours. One data set may be several gigabytes, so the data won't fit in the computer's main memory at once. I wrote an XControl which contains an XY graph. Each time the time scale is changed, i.e. the scroll bar is scrolled, the XControl generates a user event if it doesn't have enough data to display the requested time slice. The user event is handled outside the XControl by loading the appropriate set of data from disk and displaying it on the XControl. The user event can be generated a while before the XControl runs out of data; this way the data is loaded a bit in advance, which allows seamless scrolling on the XY graph. One must note that front panel updates have to be turned off in the XControl while the data is updated and turned back on after the update has finished; otherwise the XY graph will flicker annoyingly.
    Tomi Maila
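    The "keep only the most recent data in memory" part of the approach above translates outside LabVIEW as well. A minimal sketch (in Java, since LabVIEW diagrams don't paste well as text; the class name and window size are illustrative) of a bounded window that drops the oldest point once it is full, while older points are assumed to have been written to a file for on-demand reloading:
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class RecentWindow {
        private final int capacity;                                 // e.g. ~10 minutes worth of points
        private final Deque<double[]> points = new ArrayDeque<>();  // each entry: {timestamp, value}

        public RecentWindow(int capacity) {
            this.capacity = capacity;
        }

        public void add(double timestamp, double value) {
            points.addLast(new double[] {timestamp, value});
            if (points.size() > capacity) {
                points.removeFirst();                               // drop the oldest point once the window is full
            }
        }

        public double[][] snapshot() {
            return points.toArray(new double[0][]);                 // hand a copy to the plotting code
        }
    }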

  • How to deal with a large amount of data

    Hi folks,
    I have a web page that needs 3 maps to serve. Each map will hold about 150,000 entries, and all of them together will use about 100 MB. For some users with lots of data (e.g. 1,000,000 entries), it may use up to 1 GB of memory. If a few of these high-volume users log in at the same time, it can bring the server down. The data comes from files; I cannot read it on demand because that would be too slow. Loading the data into maps gives me very good performance, but it does not scale. I am thinking of serializing the maps and deserializing them when I need them. Is that my only option?
    Thanks in advance!

    JoachimSauer wrote:
    I don't buy the "too slow" argument.
    I'd put the data into a database. They are built to handle that amount of data.++
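    If a full database really is not an option, a middle ground is to cap how much of each map stays in memory and re-read evicted entries from the backing files on demand. A minimal sketch using LinkedHashMap's access-order eviction; the cache size is an assumption you would have to tune:
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        public BoundedCache(int maxEntries) {
            super(16, 0.75f, true);          // access order, so the least recently used entry is evicted first
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries;      // evict once the cap is exceeded
        }
    }
    A cache miss would then trigger a read of just that entry from the file (or, better, from a database as suggested above), so the per-user memory footprint stays bounded.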

  • Dealing with large amounts of data

    Hi
    I am new to using Flex and BlazeDS. I can see in the FAQ that binary data transfer from server to Flex app is more efficient. My question is: is there a way to build a Flex data-bound control (e.g. a datagrid) which binds to a SQL query, web service, or remoting call on the server side and then displays an unbounded amount of data as the user pages or scrolls with the scrollbar? Or does the developer have to write code from scratch to handle paging, or handle an infinite scrollbar by asking the server for chunks of data at a time?

    You have to write your own paginating grid. It's easy to do: just make a canvas, throw a grid and some buttons on it, then when the user clicks through to the next page, make a request to the server and, when you have the result, set it as the new data model for the grid.
    I would discourage you from returning an unbounded amount of data to the user; provide search functionality plus pagination.
    Hope that helps.

  • Putting different tables with large amounts of data together

    Hi,
    I need to put different kinds of tables together:
    Example 'table1' with columns: DATE, IP, TYPEN, X1, X2, X3
    To 'table0' with columns DATE, IP, TYPENUMBER.
    TYPEN in table1 needs to be inserted into TYPENUMBER in table0, but through a function which transforms it to some other value.
    There are several other tables like 'table1', but with slightly different columns, that need to be inserted into the same table ('table0').
    The amount of data in each table is quite large, so the procedure should be done in small pieces and efficiently.
    Should/Could I use data pump for this?
    Thank you!

    user13036557 wrote:
    How should I continue with this then?
    Should I delete the columns I don't need and transform the data in the table first and then use Data Pump,
    or should I simply make a procedure going through every row (in smaller pieces) of 'table1' and inserting it into 'table0'?
    You have both options. Please test both of them, calculate the time each takes to complete, and implement the better one.
    Regards
    Rajesh
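    As an illustration of the second option, a single set-based statement is usually preferable to a row-by-row procedure. A minimal JDBC sketch, assuming a PL/SQL function to_typenumber() exists for the TYPEN transformation and reusing the example column names from the post (DATE is a reserved word in Oracle, hence the quoting):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CopyTable1ToTable0 {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
                 Statement st = con.createStatement()) {
                // One set-based INSERT ... SELECT; to_typenumber() is the assumed transform function.
                int rows = st.executeUpdate(
                    "INSERT INTO table0 (\"DATE\", ip, typenumber) " +
                    "SELECT \"DATE\", ip, to_typenumber(typen) FROM table1");
                System.out.println(rows + " rows copied");
            }
        }
    }
    If one transaction is too large, the same statement can be run in slices (for example by a date range in the WHERE clause), which keeps undo usage small while still avoiding row-by-row processing.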

  • Moving a large amount of data using IMPDP with NETWORK_LINK

    Hi Guru,
    We have a requirement to move 2 TB of data from production to non-prod using the NETWORK_LINK parameter. What is the process to make it fast?
    Previously we did this, but it took 7 days to import the data and indexes.
    I have an idea; can you please tell me whether it is a good way to make the import faster?
    Step 1) Import only the metadata.
    Step 2) Import only the table data using TABLE_EXISTS_ACTION=APPEND or TRUNCATE. (The indexes are already created in step 1, so the import will be fast, as per my plan.)
    Please suggest a better approach if there is one.
    Thanks & Regards,
    Venkata Poorna Prasad.S

    You might want to check these as well:
    DataPump Import (IMPDP) Over NETWORK_LINK Is Sometimes Very Slow (Doc ID 1439691.1)
    DataPump Import Via NETWORK_LINK Is Slow With CURSOR_SHARING=FORCE (Doc ID 421441.1)
    Performance Problems When Transferring LOBs Using IMPDP With NETWORK_LINK (Doc ID 1488229.1)

  • Converting TDM to LVM / working with a large amount of data

    I use a PCI-6251 card for data acquisition and LabVIEW version 8, to log 5 channels at 100 kHz for approximately 4-5 million samples on each channel (the more the better). I use the Express VIs for reading and writing data, which is stored in .tdm format (the .tdx file is around 150 MB). I did not store it in LVM format, to reduce the time taken to acquire data.
    1. How do I convert this binary file to a .mat file?
    2. In another approach, I converted the TDM file into LVM format. This works as long as the file size is small (say 50 MB); bigger than that, LabVIEW's memory gets full and it will not save the new file. What is an efficient method to write data (say, into LVM format) for big files without causing LabVIEW's memory to overflow? I tried saving to multiple files, saving one channel at a time, and increasing the computer's virtual memory (up to 4880 MB), but I still get the 'LabVIEW memory full' error.
    3. Another problem I noticed with LabVIEW is that once it is used to acquire data, it occupies a lot of the computer's memory, even after the VI stops running. Is there a way to free the memory, and is this mainly due to bad programming?
    Any suggestions?

    I assume from your first question that you are attempting to get your data into Matlab.  If that is the case, you have three options:
    1. You can treat the tdx file as a binary file and read it directly from Matlab. Each channel is a contiguous block of the data type you stored it in (DBL, I32, etc.), with the channels in the order you stored them. You probably know how many points are in each channel; if not, you can get this information from the XML in the tdm file. This is probably your best option, since you won't have to convert anything.
    2. Early versions of TDM storage (those shipping with LV 7.1 or earlier) automatically read the entire file into memory when you load it. If you have LV 7.1, you can upgrade to a version which allows you to read portions of the file by downloading and installing the demo version of LV 8. This will upgrade the shared USI component. You can then read a portion of your large data set into memory and stream it back out to LVM.
    3. Do option 2, but use NI-HWS (available on your driver CD under the computer-based instruments tab) instead of LVM. HWS is a hierarchical binary format based on HDF5, so Matlab can read the files directly through its HDF5 interface. You just need to know the file structure; you can figure that out using HDFView. If you take this route and have questions, reply to this post and I will try to answer them. Note that you may wish to use HWS for your future storage, since its performance is much better than TDM and you can read it from Matlab. HWS/HDF5 also supports compression, and at your data rates you can probably pull this off while streaming to disk, if you have a reasonably fast computer.
    Handling large data sets in LabVIEW is an art, like most programming languages.  Check out the tutorial Managing Large Data Sets in LabVIEW for some helpful pointers and code.
    LabVIEW does not release memory until a VI exits memory, even if the VI is not running. This is an optimization to prevent a repeatedly called VI from requesting the same memory every time it is called. You can reduce this problem considerably by writing empty arrays to all your front panel objects before you exit your top-level VI. Graphs are a particularly common problem.
    This account is no longer active. Contact ShadesOfGray for current posts and information.
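    For option 1, the "contiguous block per channel" layout means plain binary reads are enough even outside Matlab. A minimal sketch (in Java rather than MATLAB; the channel order, sample count, and little-endian byte order are assumptions that must be confirmed against the XML header in the .tdm file):
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class TdxChannelReader {
        // Reads one channel's block of DBL samples out of the raw .tdx file.
        public static double[] readChannel(String tdxPath, int channelIndex, int samplesPerChannel)
                throws IOException {
            long bytesPerChannel = (long) samplesPerChannel * Double.BYTES;
            double[] samples = new double[samplesPerChannel];
            try (FileChannel ch = FileChannel.open(Paths.get(tdxPath), StandardOpenOption.READ)) {
                ByteBuffer buf = ByteBuffer.allocate((int) bytesPerChannel)
                                           .order(ByteOrder.LITTLE_ENDIAN);  // assumed byte order
                ch.position(channelIndex * bytesPerChannel);                 // channels stored back to back
                while (buf.hasRemaining() && ch.read(buf) >= 0) {
                    // keep reading until the channel block is fully buffered
                }
                buf.flip();
                buf.asDoubleBuffer().get(samples);
            }
            return samples;
        }
    }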
