Help using a web service to report back very large data

Hi All,
I am in the process of creating a web service to report back lots of data according to some input params; I am using Axis2 to test the SOAP messages.
Now the problem: the web service will be collecting data from the live database, which is attached to a web application. That application has high usage and I don't want to affect their service.
Do you think the web service will affect their service, and would we be better off reporting against a backup database or something similar?
Also, a client can request data on a certain user via a plain string ID in the WSDL file (the data returned will be a very large user history). Would it be best to keep this as a single-user input, or to create an array of strings with a max length (set in an XSD property)?
My concern is that with a single-ID interface, a client has to call the service once per user, so the number of requests becomes customers * number of users. An array in the XSD would let them report on, say, 10 users at a time.
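For the array option, a minimal XSD sketch (the element and field names here are invented, not taken from your WSDL) that caps a request at 10 user IDs via maxOccurs:

<xs:element name="GetUserHistoryRequest">
  <xs:complexType>
    <xs:sequence>
      <!-- up to 10 IDs per call; the batch size is enforced by maxOccurs -->
      <xs:element name="userId" type="xs:string" minOccurs="1" maxOccurs="10"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>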

Hi Steve
It seems like you have a lot of different versions of Crystal Reports and Business Objects products and you are getting them mixed up a bit. It can be confusing.
So basically you want to access Crystal reports via the BO SDK and you want the reports to connect to web services.
1. Creating the Report
Since you have BOE XI R2 my guess is that you have a copy of Crystal Reports XI R2 or R1.
Create the report with XI because it comes with a special XML Web Services data source driver. My guess is that you already have the web service created.
2. Publish the report to BOE XI.
Using the report designer, publish the report to BOE (Save As).
3. Write code to view the report.
If you need to change the data source at runtime then you will want to use the Report Application Server (RAS) SDK to do that. If you are only changing the data source to move from Dev/QA/Production then you may not want to do this task at runtime; in the CMC you should be able to change the data source for migration purposes. If you truly need to do it at runtime then you want RAS.
Here's a sample to get you started.
http://diamond.businessobjects.com/node/6197
Rob's blog - http://diamond.businessobjects.com/robhorne

Similar Messages

  • My MacBook Pro is not booting up. I haven't backed up my data. Apple is offering to replace the device. Could you please help me with how to back up my data?

    Please help me back up my data. My system is not booting up; there is some hardware issue.
    Apple is ready to replace my device, but before that I need to back up my data.

    OK - now it's great that you got into the Recovery partition. When you hold down the option key, do you see your internal hard drive as well? If so, try booting into it and making sure that you set it as your start-up volume in System Preferences.
    If you don't see your hard drive...
    What you should do now is open Disk Utility. You want to see if it recognizes "Macintosh HD" (or whatever you've named your hard drive). If it does, select it and select "Verify Disk". It is likely that you will find some errors - if you do, select "Repair Disk". If the disk repairs successfully, you might want to just reboot and see if you can boot into your normal disk.
    If there are no repairs needed after you verify your disk, try repairing permissions. You'll almost always have some permissions that need to be repaired. After this, try booting into your regular disk.
    Good luck - call back - I'll be up for quite some time.
    Clinton

  • Impact of write-back on large data cubes(Both ASO and BSO)

    Hi All,
    We are in the design phase for our Essbase cubes (both ASO and BSO).
    Has anyone encountered issues performing write-back on large Essbase data cubes?
    With Regards,
    Madhan

    Hi,
    Do you mean a data load from a flat file or SQL, or simply lock and send? An estimate of your record set size would also help.
    I get 30 million records into an ASO cube in about 5 minutes from SQL using load rules. This is in 11.1.2.1.
    Thanks,
    Nathan

  • Using the cl_gui_alv_grid class in a report, but when there is no data the entire session is closed?

    Hi,
    I am using cl_gui_alv_grid for one of my reports. It works fine when there are records, but when there are no records the entire session is closed from the selection screen itself.
    I have a class with data/method declarations and an event class.
    Appreciate your help.
    Kind regards
    Rama

    Hello Siva,
    Code an EVENT for NO DATA in the main class. I guess you have already done the same:
    "Having class with data /method declaration and an event class .."
    Now, while implementing the method for that event, don't forget to write LEAVE LIST-PROCESSING as well.
    For example: RAISE this event when SY-SUBRC is NE 0 after the SELECT statement, i.e. when no data is found. A sketch follows.
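    A rough ABAP sketch of that pattern (the local class, its names, and the use of SFLIGHT are invented for illustration; adapt to your report):
    CLASS lcl_report DEFINITION.
      PUBLIC SECTION.
        EVENTS no_data.
        METHODS fetch_data.
        METHODS on_no_data FOR EVENT no_data OF lcl_report.
      PRIVATE SECTION.
        DATA mt_data TYPE TABLE OF sflight.
    ENDCLASS.
    CLASS lcl_report IMPLEMENTATION.
      METHOD fetch_data.
        SELECT * FROM sflight INTO TABLE mt_data.
        IF sy-subrc NE 0.
          RAISE EVENT no_data.  " no rows found for the selection
        ENDIF.
      ENDMETHOD.
      METHOD on_no_data.
        MESSAGE 'No records found for the selection' TYPE 'S'.
        LEAVE LIST-PROCESSING.  " return to the selection screen instead of closing the session
      ENDMETHOD.
    ENDCLASS.
    DATA go_report TYPE REF TO lcl_report.
    START-OF-SELECTION.
      CREATE OBJECT go_report.
      SET HANDLER go_report->on_no_data FOR go_report.
      go_report->fetch_data( ).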

  • Help using multiple if-else statements & manual dynamic XML data input to trigger a goto and play.

    Below is code for a timer countdown that reads off the computer clock. The snippet below it is meant to say "if it reaches the date, go to and play frame 2":
    timer.removeEventListener(TimerEvent.TIMER, updateTime);
      timer.stop();
      gotoAndPlay(2);
    Below is the code for the manual input. I set up a dynamic text field in Flash and named it raffle_tix_remain. When it is loaded onto the host, I can manually update the XML and the change takes effect.
    raffle_tix_remain.text = root.loaderInfo.parameters.raffle_tix_remain;
    My question: since raffle_tix_remain is manual input from a user via XML, is there a way to tell Flash, once it refreshes and raffle_tix_remain reaches zero, to gotoAndPlay(2) and let it play like a "sold out" sign (see the sketch after the code below)?
    I guess that would be an if-else statement.
    Code Below-----------------
    stop();
    var year:Number = 2011;
    var month:Number = 12;
    var day:Number = 30;
    var finalDate:Date = new Date(year,month-1,day);
    var timer:Timer = new Timer(100);
    timer.addEventListener(TimerEvent.TIMER, updateTime);
    timer.start();
    function updateTime(e:TimerEvent):void{
              var now:Date = new Date();
              var remainTime:Number = finalDate.getTime() - now.getTime();
              if (remainTime >0) {
                        var secs:Number = Math.floor(remainTime/1000);
                        var mins:Number = Math.floor(secs/60);
                        var hours:Number = Math.floor(mins/60);
                        var days:Number = Math.floor(hours/24);
                        var secsText:String = (secs%60).toString();
                        var minsText:String = (mins%60).toString();
                        var hoursText:String = (hours%24).toString();
                        var daysText:String = days.toString();
                        if (secsText.length < 2) {secsText = "0" + secsText;}
                        if (minsText.length < 2) {minsText = "0" + minsText;}
                        if (hoursText.length < 2) {hoursText = "0" + hoursText;}
                        if (daysText.length < 2) {daysText = "0" + daysText;}
                        day_txt.text = daysText;
                        hour_txt.text = hoursText;
                        min_txt.text = minsText;
                        sec_txt.text = secsText;
              } else {
                        timer.removeEventListener(TimerEvent.TIMER, updateTime);
                        timer.stop();
                        gotoAndPlay(2);
              }
    }

    stop();
    var year:Number = 2011;
    var month:Number = 12;
    var day:Number = 30;
    var finalDate:Date = new Date(year,month-1,day);
      var now:Date = new Date();
    var timer:Timer = new Timer(1000);
    timer.addEventListener(TimerEvent.TIMER, updateTime);
    timer.start();
    function updateTime(e:TimerEvent):void{
             var remainTime:Number = finalDate.getTime() - now.getTime()-e.currentCount;
              if (remainTime >0 || raffle_tix_remain==0) {
                        var secs:Number = Math.floor(remainTime/1000);
                        var mins:Number = Math.floor(secs/60);
                        var hours:Number = Math.floor(mins/60);
                        var days:Number = Math.floor(hours/24);
                        var secsText:String = (secs%60).toString();
                        var minsText:String = (mins%60).toString();
                        var hoursText:String = (hours%24).toString();
                        var daysText:String = days.toString();
                        if (secsText.length < 2) {secsText = "0" + secsText;}
                        if (minsText.length < 2) {minsText = "0" + minsText;}
                        if (hoursText.length < 2) {hoursText = "0" + hoursText;}
                        if (daysText.length < 2) {daysText = "0" + daysText;}
                        day_txt.text = daysText;
                        hour_txt.text = hoursText;
                        min_txt.text = minsText;
                        sec_txt.text = secsText;
              } else {
                        timer.removeEventListener(TimerEvent.TIMER, updateTime);
                        timer.stop();
                        gotoAndPlay(2);
              }
    }
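    One way to get the "sold out" behavior (a rough sketch, not a tested answer: it assumes raffle_tix_remain.text always holds a numeric string loaded from the XML; also note that comparing the TextField itself to 0, as in the second version above, will not work reliably; convert its text to a Number first):
    function updateTime(e:TimerEvent):void {
        var ticketsLeft:Number = Number(raffle_tix_remain.text);
        var remainTime:Number = finalDate.getTime() - new Date().getTime();
        // jump to the "sold out" frame when the date passes or the ticket count hits zero
        if (remainTime <= 0 || ticketsLeft <= 0) {
            timer.removeEventListener(TimerEvent.TIMER, updateTime);
            timer.stop();
            gotoAndPlay(2);
            return;
        }
        // ...otherwise update day_txt / hour_txt / min_txt / sec_txt as before...
    }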

  • Are Analytic Workspaces suitable for very large data sets?

    Hi all,
    I have run many different tests with analytic workspaces and I have used the different features (compression, composites...). The results, especially for maintenance, are disappointing.
    I have a star schema with 6 dimensions. The fact table has 730 million rows; the first dimension has 2.9 million rows and the other 5 dimensions have between 25 and 300 rows each.
    My conclusion is that analytic workspaces don't help in situations like mine. The time for maintenance is very, very bad, not to mention the time for aggregations. I even tried to populate the cube in parts (90 million rows for the first load) but nothing changed. And there are some other problems with storage and tablespaces (I always get the message "unable to extend TEMP tablespace"; its size is 54 GB).
    Is there something I am missing? Does anyone have a similar problem or a different opinion?
    Thank you,
    Ilias

    A few other tips to add to Keith's excellent advice:
    - How many CPUs does your server have? The answer to this may help you decide the optimal level to partition at (in my experience DAY is too low and can cause different problems). What other levels does your time dimension have? Are you loading your cubes in parallel?
    - To speed up your load, partition your underlying fact table with the same granularity as your cubes and place an index on the field mapped to the partition dimension (see the sketch after this list).
    - Are you using 10.2.0.3? If so, be very careful with the storage data type you choose when creating your cubes. The default in 10.2.0.3 is NUMBER, which has the capability of storing data to 38 significant figures. This usually exceeds what is required for most datasets. If your dataset allows you to use storage of 15 significant figures then you should create your cubes using the DECIMAL data type instead. This will use about one third of the storage space and significantly increase your build speeds (in my experience, more than 3 times faster).
    - Make sure you have preallocated enough permanent and temporary tablespaces for your build. Autoextending can be very time consuming.
    - Consider reducing the amount of aggregation you do in batch. It should not be necessary to pre-aggregate everything in order to get good query performance.
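    A sketch of the fact-table tip above (table, column, and partition names are invented; adapt them to your schema):
    -- partition the fact table at the cube's partition level (MONTH here) and
    -- index the column mapped to the partition dimension
    CREATE TABLE sales_fact (
      month_id NUMBER,
      cust_id  NUMBER,
      amount   NUMBER
    )
    PARTITION BY LIST (month_id) (
      PARTITION p_200701 VALUES (200701),
      PARTITION p_200702 VALUES (200702)
    );
    CREATE INDEX sales_fact_month_ix ON sales_fact (month_id) LOCAL;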
    Generally, I would say that the volume should not be a problem. A single dimension with 2.9 million values is fairly big and can be slow (in OLAP terms) to query but that should not be an obstacle to building it in the first place.
    Good luck!
    Stuart

  • The command,"Dataloadred" is not working for a very large data file

    I have a file whose size is 1.8 GB, and it has 3 data channels including the time channel. I tried reduced loading of the file using the command "DataLoadRed" with an interval of 5. Below is the script.
    'start script----------------------------
    call Filenameget("Data","fileread")
    call DataLoadHdFile(Filedlgfile)
    call DataLoadRed("Filedlgfile","2-3",1,0,"First interval value","Start/Width/Number",10,453028984,5,90605794,1)
    'end script-------------------------------------
    The following error message, which resulted from the command "DataLoadRed()", was displayed on the screen:
    "Loading file (filename): Insufficient channels are available with the required channel length [3/90605794]."
    To resolve this problem, I tried to allocate a channel length of 200M using the command "ChnAlloc()", but this resulted in the same kind of error as above.
    How can I resolve this problem and load my data with reduction? Your reply would be appreciated.
    Regards,
    Sky

    Hi,
    Please try this:
    1. Start DIAdem
    2. Open the "settings" menu
    3. Choose "Memory management..."
    4. Click the "Data matrix..." button
    5. In the dialog, set the "No. of channels" and "Channel length" to meet your requirements
    6. Click "Close"
    7. DIAdem will now restart and set up the data channels at the length you have selected
    Depending on how large you set the data matrix and on your computer equipment, starting DIAdem may take a few minutes.
    An alternative to loading and reducing data sets may be the "Register File" function in the DATA window. Once you have clicked on the DATA icon, select the "File" menu and choose the "Register File..." option. Registering a file will not actually load the data set, and thus speeds up the data access part of DIAdem. To learn more about this function, go to the help system and search for "Registering a file in DIAdem DATA".
    I hope this will help you. If you have any additional questions, please let me know.
    Otmar
    Otmar D. Foehner
    Business Development Manager
    DIAdem and Test Data Management
    National Instruments
    Austin, TX - USA
    "For an optimist the glass is half full, for a pessimist it's half empty, and for an engineer is twice bigger than necessary."

  • Very large HEAPDUMP files are generated when executing BI Web reports (NW 7.0)

    Dear Gurus,
    I'm facing a new problem.
    When a few users are working in the Portal executing BI Web reports and queries, the system stops and big files are generated in the directory /usr/sap/BWQ/DVEBMGS42/j2ee/cluster/server0.
    I'm using AIX 5.3. The files are these:
    2354248 Sep 29 12:31 Snap0001.20080929.153102.766064.trc
    1028628480 Sep 29 12:32 heapdump.20080929.153102.766064.phd
    0 Sep 29 12:32 javacore.20080929.153102.766064.txt
    I searched for a solution in SAP Help and the Notes. I've read a lot of notes:
    SAP Note 1030279 - Reports with very large result sets-BI Java
    SAP Note 1053495 - Settings to get a heapdump with IBM JVM on AIX
    SAP Note 1008619 - java.lang.OutOfMemoryError in BEx Web Applications
    SAP Note 1127156 - Safety belt: Result set is too large
    SAP Note 723909 - Java VM settings for J2EE
    SAP Note 1150242 - Improving performance/memory in the BEX Analyzer
    SAP Note 950602 - Performance problems when you start a query in Java Web
    SAP Note 1021517 - NW 2004s BI Web memory optimization for large analysis item
    SAP Note 1044330 - Java parameterization for BI systems
    SAP Note 1025307 - Composite note for NW2004s performance: Reporting
    But I still have not found an answer to this problem.
    In note 1030279 it is written:
    "We will provide an optimization of the memory requirement in the next Support Package Stack. With this optimization, you can display a report as 'stateless', so that the system can then immediately release the memory that is required to set up the result set."
    I'm using Support Package Stack 15 for ABAP and Java, but I haven't found more information about this problem or the stateless function in any other note, and I don't know how I can use this STATELESS function in BI.
    Does anybody have an idea how to solve this problem?
    Thanks a lot,
    Carlos

    Hi,
    Heap dumps are generated when there is an imbalance in the Java VM parameterization.
    Also, please remove the parameter "-XX:+HeapDumpOnOutOfMemoryError" in the Config Tool, so that heap dumps are not generated and do not fill up the disk space.
    My advice is to send the heap dumps to SAP for recommendations. Meanwhile, check the SAP notes for Java VM recommendations.
    Regards
    Thilip Kumar
    Edited by: Thilip Kumar on Sep 30, 2008 5:58 PM

  • Using multiple Collections in Report queries

    I am using many collections in my Report Queries (where conditions) to create the XML needed for running reports (see the where condition below). I name each one with a different collection name. I am seeing some performance issues, and it looks like they have to do with the number of collections I am using. It seems that after around 6 collections are used in the query, the SQL slows down tremendously. It doesn't matter which ones I use, so it doesn't seem to be a specific one.
    Are there issues with using many collections? Should I try using 1 collection and add an identifier to one of the collection columns instead (sketched after the query below)?
    where ...
    and C.FK_SCHOOL IN (SELECT c001
    FROM apex_collections AC1
    WHERE AC1.collection_name = 'SIS_REPORTS_SCHOOLS')
    and nvl(C.ACTIVE,'NONE') IN (SELECT decode(c001,'NONE',nvl(C.ACTIVE,'NONE'),C001)
    FROM apex_collections AC2
    WHERE AC2.collection_name = 'SIS_REPORTS_ACTIVES')
    and nvl(C5.CALENDAR_NO,'NONE') IN (SELECT decode(c001,'NONE',nvl(C5.CALENDAR_NO,'NONE'),C001)
    FROM apex_collections AC3
    WHERE AC3.collection_name = 'SIS_REPORTS_SCHOOL_CALENDAR_NOS')
    and nvl(C10.CLUSTER_CODE,'NONE') IN (SELECT decode(c001,'NONE',nvl(C10.CLUSTER_CODE,'NONE'),C001)
    FROM apex_collections AC4
    WHERE AC4.collection_name = 'SIS_REPORTS_CLUSTER_CODES')
    and nvl(C4.EMPLOYEE_NUMBER,'NONE') IN (SELECT decode(c001,'NONE',nvl(C4.EMPLOYEE_NUMBER,'NONE'),C001)
    FROM apex_collections AC5
    WHERE AC5.collection_name = 'SIS_REPORTS_COUNSELORS')
    and nvl(I3.DISTRICT,'NONE') IN (SELECT decode(c001,'NONE',nvl(I3.DISTRICT,'NONE'),C001)
    FROM apex_collections AC6
    WHERE AC6.collection_name = 'SIS_REPORTS_DISTRICTS')
    and nvl(A1.ETHNIC,'NONE') IN (SELECT decode(c001,'NONE',nvl(A1.ETHNIC,'NONE'),C001)
    FROM apex_collections AC7
    WHERE AC7.collection_name = 'SIS_REPORTS_ETHNICS')
    and nvl(C2.GRADE_LEVEL,'NONE') IN (SELECT decode(c001,'NONE',nvl(C2.GRADE_LEVEL,'NONE'),C001)
    FROM apex_collections AC8
    WHERE AC8.collection_name = 'SIS_REPORTS_GRADE_LEVELS')
    and nvl(C3.HOMEROOM,'NONE') IN (SELECT decode(c001,'NONE',nvl(C3.HOMEROOM,'NONE'),C001)
    FROM apex_collections AC9
    WHERE AC9.collection_name = 'SIS_REPORTS_HOMEROOMS')
    and nvl(C9.PROGRAM_CODE,'NONE') IN (SELECT decode(c001,'NONE',nvl(C9.PROGRAM_CODE,'NONE'),C001)
    FROM apex_collections AC10
    WHERE AC10.collection_name = 'SIS_REPORTS_PROGRAM_CODES')
    and nvl(E2.S_TYPE,'NONE') IN (SELECT decode(c001,'NONE',nvl(E2.S_TYPE,'NONE'),C001)
    FROM apex_collections AC11
    WHERE AC11.collection_name = 'SIS_REPORTS_SCH_TYPES')
    and nvl(A.SEX,'NONE') IN (SELECT decode(c001,'NONE',nvl(A.SEX,'NONE'),C001)
    FROM apex_collections AC12
    WHERE AC12.collection_name = 'SIS_REPORTS_SEXS')
    and nvl(C7.SPECIAL_ED,'NONE') IN (SELECT decode(c001,'NONE',nvl(C7.SPECIAL_ED,'NONE'),C001)
    FROM apex_collections AC13
    WHERE AC13.collection_name = 'SIS_REPORTS_SPECIAL_EDS')
    and nvl(A.STUDENT_ID,'NONE') IN (SELECT decode(c001,'NONE',nvl(A.STUDENT_ID,'NONE'),C001)
    FROM apex_collections AC14
    WHERE AC14.collection_name = 'SIS_REPORTS_STUDENTS')
    and A.PK_ID IN (SELECT decode(c001,'NONE',A.PK_ID,AC16.FK_STU_BASE)
    FROM student_list_det AC16, apex_collections AC15
    WHERE AC15.collection_name = 'SIS_REPORTS_STUDENT_LISTS' and
    AC16.fk_student_list (+) = AC15.c001)
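    For what it's worth, the single-collection idea from the question could look roughly like this (the combined collection name and the use of c002 as a filter-type marker are hypothetical, not tested):
    SELECT c001
    FROM apex_collections
    WHERE collection_name = 'SIS_REPORT_FILTERS' -- one collection for all filters
    AND c002 = 'SCHOOL' -- c002 marks which filter a row belongs to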

    Hi,
    APEX_COLLECTIONS are special structures that do not have indexes, except for the one on SEQ_ID. The result is that as the number of collections used in a query increases, the number of full table scans on the underlying tables kills speed. They are not intended for such heavy use, as has been discussed in some of the threads in this forum.
    They are extremely useful, but no good for very large data sets or a large number of joins. Global temporary tables are also not an option with APEX.
    You may have to resort to Materialized Views or intermediate/temp tables to get speed.
    Regards,

  • Spooling large data using UTL_FILE

    Hi Everybody!
    While spooling data out into a file using the UTL_FILE package, I am unable to spool the data. The column 'source_where_clause_text' holds very large data; the value in question is 2531 characters long.
    It's not giving any error, but the external table is not returning any data.
    Following is the code.
    CREATE OR REPLACE PROCEDURE transformation_utl_file AS
    CURSOR c1 IS
    select transformation_nme,source_where_clause_text
    from utility.data_transformation where transformation_nme='product_closing';
    v_fh UTL_FILE.file_type;
    BEGIN
    v_fh := UTL_FILE.fopen('UTLFILELOAD', 'transformation_data.dat', 'w', 32000);--132767
    FOR ci IN c1
    LOOP
    UTL_FILE.put_line( v_fh, ci.transformation_nme ||'~'|| ci.source_where_clause_text);
    -- UTL_FILE.put_line( v_fh, ci.system_id ||'~'||ci.system_nme ||'~'|| ci.system_desc ||'~'|| ci.date_stamp);
    END LOOP;
    UTL_FILE.fclose( v_fh );
    exception
    when utl_file.invalid_path then dbms_output.put_line('Invalid Path');
    END;
    select length(
    '(select to_char(b.system_id) || to_date(a.period_start_date,''dd-mon-yyyy'') view_key, b.system_id, to_date(a.period_start_date,''dd-mon-yyyy'') period_start_date, to_date(a.period_end_date,''dd-mon-yyyy'') period_end_date, to_date(a.closing_date,''dd-mon-yyyy'') closing_date
    from ((select decode(certification_type_code, ''A'', ''IDESK_PRODUCTS_PIPELINE'',''C'', ''IDESK_PRODUCTS_COMMITMENT_LINKAGE'') system_nme, to_char(to_date(''01'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy''),''dd-mon-yyyy'') period_start_date, to_char(last_day(to_date(''12'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy'')),''dd-mon-yyyy'') period_end_date, to_char(trunc(certification_datetime_stamp), ''dd-mon-yyyy'') closing_date from odsupload.prod_monthly_certification where certification_type_code in (''A'',''C'')
    minus
    select trim(system_nme), to_char(period_start_date, ''dd-mon-yyyy''), to_char(period_end_date, ''dd-mon-yyyy''), to_char(closing_date, ''dd-mon-yyyy'') from utility.system_closing_status_v where system_nme in (''IDESK_PRODUCTS_PIPELINE'', ''IDESK_PRODUCTS_COMMITMENT_LINKAGE''))
    union all
    (select ''BMS Commitment Link'' system_nme, to_char(to_date(''01'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy''),''dd-mon-yyyy'') period_start_date, to_char(last_day(to_date(''12'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy'')),''dd-mon-yyyy'') period_end_date, to_char(trunc(certification_datetime_stamp), ''dd-mon-yyyy'') closing_date from odsupload.prod_monthly_certification where certification_type_code = ''C''
    minus
    select trim(system_nme), to_char(period_start_date, ''dd-mon-yyyy''), to_char(period_end_date, ''dd-mon-yyyy''), to_char(closing_date, ''dd-mon-yyyy'') from utility.system_closing_status_v where system_nme = ''BMS Commitment Link'')
    union all
    (select ''BMS'' system_nme, to_char(to_date(''01'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy''),''dd-mon-yyyy'') period_start_date, to_char(last_day(to_date(''12'' || lpad(trim(to_char(certification_as_of_month_yr)),6,''0''),''ddmmyyyy'')),''dd-mon-yyyy'') period_end_date, to_char(trunc(certification_datetime_stamp), ''dd-mon-yyyy'') closing_date from odsupload.prod_monthly_certification where certification_type_code = ''A''
    minus
    select trim(system_nme), to_char(period_start_date, ''dd-mon-yyyy''), to_char(period_end_date, ''dd-mon-yyyy''), to_char(closing_date, ''dd-mon-yyyy'') from utility.system_closing_status_v where system_nme = ''BMS'')) a, utility.system_v b where a.system_nme = b.system_nme)') length1
    from dual
    --2531
    begin
    SSUBRAMANIAN.transformation_utl_file;
    end;
    create table transformation_utl
    (
      TRANSFORMATION_NME VARCHAR2(40),
      SOURCE_WHERE_CLAUSE_TEXT VARCHAR2(4000)
    )
    ORGANIZATION external
    (
      type oracle_loader
      default directory UTLFILELOAD
      ACCESS PARAMETERS
      (
        records delimited by newline CHARACTERSET US7ASCII
        BADFILE UTLFILELOAD:'transformation.bad'
        LOGFILE UTLFILELOAD:'transformation.log'
        fields TERMINATED by "~"
      )
      LOCATION ('transformation_data.dat')
    ) REJECT LIMIT UNLIMITED
    select * from transformation_utl

    After running the procedure, did you verify that the file 'transformation_data.dat' has data? Open it and make sure it's correct. Maybe it has no data, and that's why the external table doesn't show anything.
    Also, check the LOG and BAD files after selecting from the external table. Maybe they have errors in them (or all the data is going to the BAD file because something is defined wrong).
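    One guess worth checking (an assumption, not verified against your data): oracle_loader fields default to CHAR(255), so the 2531-character source_where_clause_text would be rejected to the BAD file unless the field length is declared explicitly in the ACCESS PARAMETERS, e.g.:
    fields TERMINATED by "~"
    (
      TRANSFORMATION_NME CHAR(40),
      SOURCE_WHERE_CLAUSE_TEXT CHAR(4000)
    )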

  • I have an iPod 3G; am I ever able to use iCloud? It would be VERY helpful to me.

    I have an iPod 3G; am I ever able to use iCloud? It would be VERY helpful to me.

    Yes, but you must update to iOS 5 first.
    iTunes: Backing up, updating, and restoring iOS software
    B-rock

  • I lost my iPhone. How can I get my data back on another one without using an iCloud backup, just a backup in iTunes? Please help.

    I lost my iPhone. How can I get my data back on another one without using an iCloud backup, just a backup in iTunes? Please help.

    You can find the backup files and then copy them to a safe place if you are worried about this.
    iTunes places the backup files in these places:
    Mac: ~/Library/Application Support/MobileSync/Backup/
    The "~" represents your Home folder. If you don't see Library in your Home folder, hold Option and click the Go menu.
    Windows Vista, Windows 7, and Windows 8: \Users\(username)\AppData\Roaming\Apple Computer\MobileSync\Backup\
    To quickly access the AppData folder, click Start. In the search bar, type %appdata%, then press Return.
    Windows XP: \Documents and Settings\(username)\Application Data\Apple Computer\MobileSync\Backup\
    To quickly access the Application Data folder, click Start, then choose Run. In the search bar, type %appdata%, then click OK.

  • Help needed while exporting Crystal Reports to HTML file format using Java

    Help needed while exporting Crystal Reports to HTML file format using the Java API (not using the Crystal viewer). I want to download the HTML file of the report.
    Thanks

    The ReportExportFormat class does not have an HTML format; it has got to be XML. Export to HTML is available from the CR Designer only. A sketch of the XML export follows.
    Edited by: Aasavari Bhave on Jan 24, 2012 11:37 AM
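    A rough Java sketch of the XML export (the package, class, and method names are from the Crystal Reports Java SDK as commonly documented; treat the exact signatures as assumptions and check them against your SDK version; the file paths are made up):
    import com.crystaldecisions.reports.sdk.ReportClientDocument;
    import com.crystaldecisions.sdk.occa.report.exportoptions.ReportExportFormat;
    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;

    public class ExportToXml {
        public static void main(String[] args) throws Exception {
            ReportClientDocument doc = new ReportClientDocument();
            doc.open("C:/reports/MyReport.rpt", 0); // hypothetical report path
            // ReportExportFormat has no HTML constant, so export to XML instead
            InputStream in = doc.getPrintOutputController().export(ReportExportFormat.XML);
            OutputStream out = new FileOutputStream("MyReport.xml");
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) > 0; ) {
                out.write(buf, 0, n); // copy the exported stream to disk
            }
            out.close();
            in.close();
            doc.close();
        }
    }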

  • I have a Windows 8.1 machine. I used a projector a while back in a PowerPoint presentation. Now my menus for Adobe are very small and hard to read. The print command is so compressed that I cannot read or use it. How do I fix this problem?

    I used a projector a while back in a PowerPoint presentation. Now my menus for Adobe are very small and hard to read. The print command is so compressed that I cannot read or use it. How do I fix this problem?

    There is no application called "Adobe" - you are either working with Adobe Acrobat or the free Adobe Reader. Open up the preferences for the application (Edit>Preferences), then go to the "General" category and modify the settings for "Scale for screen resolution". You will have to restart the application after you do that. Does that fix your problems?

  • Hi, I need help. My iPhone was stolen. How can I get back my iPhone? Serial No. DX*****PMW. Could you help me find my iPhone? Thank you very much. Could you send me the ICCID number to help me find the iPhone?

    Hi, I need help. My iPhone was stolen. How can I get back my iPhone? Serial No. DX******PMW. Could you help me find my iPhone? Thank you very much. Could you send me the ICCID number to help me find the iPhone, if you can help me?
    <Personal Information Edited by Host>
    Sincerely, a Chinese girl who really needs your help

    Sorry, we are all users on this user community, not Apple staff.
    Read this:
    http://support.apple.com/kb/HT2526
    Apple does not and cannot assist in finding stolen property; that is the responsibility of your police.

Maybe you are looking for

  • Using JCo as an RFCServer in a J2EE Container (Threading Issue)

    Hello, I want to use JCo as an RFCServer in a J2EE Container (e.g. JBoss, BEA WLS or WAS 6.40). Therefore I use the JCo.Server class as shown in Example5 in the JCo Examples. But the JCo.Server class starts a thread (JCo.ServerThread) for each Serv

  • Drag and Drop Image to PDF with JavaScript

    Hi, I have to create some PDF reports, and contained within these reports there are numerous images, some as many as 30-40. This is really time consuming. Can anyone please advise if there is some JavaScript available so I may Drag & Drop an image on

  • How do I get my computer to recognize my iPod touch?

    I have Windows 7 and have no problem with my iPhone, yet when I connect a new generation-5 iPod touch I hear the tone from the computer as if it is connected, but I do not see it in Windows Explorer or in iTunes. Why can I not see it?

  • Need to test if a column has unique values or not

    Hi all, in an ETL process I need to check whether some columns have unique values or not using SQL or PL/SQL. Suppose we need to load a big data file with an external table initially and then test if values in one or more columns are unique in order to pro

  • Sort by year

    I'd really love to sort my music by 'year' within my iPod, i.e. the same as you can sort by 'album', 'artist', 'genre' etc. I can bring up the 'year' column in iTunes; however, I can't find an option to duplicate this onto my iPod. I'd happily swap 'ge