HT1577 Movie is too large and is taking too much time.  How do I cancel?

How do I cancel a movie download that is taking all my GB on the iPhone hotspot?

That's true...
I solved my problem by invoking setFetchSize on the ResultSet object, e.g. resultSet.setFetchSize(1000).
But that only sorted out the problem for fewer than 1 lakh records. I still want to test with more than 1 lakh records.
Actually, I read a nice article on the net:
[http://www.precisejava.com/javaperf/j2ee/JDBC.htm#JDBC114]
It describes solutions for this type of problem, but it doesn't give any examples, and without examples I can't work out how to resolve it.
It gives two solutions, i.e.:
Fetch small amounts of data iteratively instead of fetching all the data at once.
Applications often need to retrieve huge amounts of data from the database using JDBC, for example in search operations. If the client requests a search, the application might return the whole result set at once. This takes a lot of time and has an impact on performance. The solutions to the problem are:
1. Cache the search data on the server side and return it to the client iteratively. For example, if the search returns 1000 records, return the data to the client in 10 iterations of 100 records each.
(But I don't understand how I can do this in Java.)
2. Use stored procedures to return data iteratively. This does not use server-side caching; instead, the server-side application uses stored procedures to return small amounts of data iteratively.
(But I don't understand how I can do this in Java.)
If you know either of these solutions, could you please give me an example of how to do it? (An illustrative sketch of the first approach follows below.)
Thanks in Advance,
Shailendra
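
For illustration only, here is a minimal sketch of the first (iterative fetch) approach in plain JDBC: rows are pulled in chunks of 1000 using ROWNUM-based keyset pagination, and the driver is also hinted with setFetchSize. The connection URL, the table BIG_TABLE and the columns ID/NAME are made-up placeholders, not anything from the original thread.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ChunkedFetch {

    // Pull rows in chunks of 1000, keyed on a numeric ID column, so neither the
    // driver nor the application ever holds the whole result set at once.
    public static void main(String[] args) throws Exception {
        final int chunkSize = 1000;
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password")) {
            long lastId = 0;      // highest ID processed so far
            boolean more = true;
            while (more) {
                more = false;
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT id, name FROM"
                      + " (SELECT id, name FROM big_table WHERE id > ? ORDER BY id)"
                      + " WHERE ROWNUM <= ?")) {
                    ps.setLong(1, lastId);
                    ps.setInt(2, chunkSize);
                    ps.setFetchSize(chunkSize); // ask the driver to transfer rows in batches too
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            lastId = rs.getLong("id");
                            process(rs.getString("name"));
                            more = true; // this chunk was non-empty, so try the next one
                        }
                    }
                }
            }
        }
    }

    private static void process(String name) {
        // placeholder for whatever the application does with each record
        System.out.println(name);
    }
}

Each loop iteration hands at most one chunk to the caller, which is essentially the "return data to the client in N iterations" idea from the article; a real server-side cache would sit between this loop and the client.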

Similar Messages

  • Large records retrieved from the network take too much time. How can I improve it?

    Hi All,
    I have Oracle Server 10g, and I am trying to fetch around 200 thousand (2 lakh) records. I have used a Servlet that is deployed on JBoss 4.0.
    And these records come over the network.
    I have used the simple rs.next() method, but it takes too much time: I get only about 30 records within 1 second, so for all 2 lakh records the system takes more than 40 minutes, and my requirement is that they have to be retrieved within 40 minutes.
    Is there another way around this problem? Is there any way that the ResultSet gets 1000 records in one call?
    As I read somewhere: "If we use a normal ResultSet, data isn't retrieved until you do the next call. The ResultSet isn't a memory table; it's a cursor into the database which loads the next row on request (though the drivers are at liberty to anticipate the request)."
    So if we can request around 1000 records in one call, then maybe we can reduce the time.
    Does anyone have an idea how to improve this?
    Regards,
    Shailendra Soni

    That's true...
    I solved my problem by invoking setFetchSize on the ResultSet object, e.g. resultSet.setFetchSize(1000).
    But that only sorted out the problem for fewer than 1 lakh records. I still want to test with more than 1 lakh records.
    Actually, I read a nice article on the net:
    [http://www.precisejava.com/javaperf/j2ee/JDBC.htm#JDBC114]
    It describes solutions for this type of problem, but it doesn't give any examples, and without examples I can't work out how to resolve it.
    It gives two solutions, i.e.:
    Fetch small amounts of data iteratively instead of fetching all the data at once.
    Applications often need to retrieve huge amounts of data from the database using JDBC, for example in search operations. If the client requests a search, the application might return the whole result set at once. This takes a lot of time and has an impact on performance. The solutions to the problem are:
    1. Cache the search data on the server side and return it to the client iteratively. For example, if the search returns 1000 records, return the data to the client in 10 iterations of 100 records each.
    (But I don't understand how I can do this in Java.)
    2. Use stored procedures to return data iteratively. This does not use server-side caching; instead, the server-side application uses stored procedures to return small amounts of data iteratively.
    (But I don't understand how I can do this in Java; a rough sketch of the stored-procedure approach appears at the end of this message.)
    If you know either of these solutions, could you please give me an example of how to do it?
    Thanks in Advance,
    Shailendra
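
    As a purely illustrative sketch of the second (stored-procedure) approach: the client repeatedly calls a procedure that returns one chunk of rows through a REF CURSOR. The procedure name get_records_chunk, the table and columns, and the connection URL are all invented for this example; it also assumes a JDBC 4.2 driver (Java 8+) for Types.REF_CURSOR.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Types;

    public class ChunkedProcedureFetch {

        /*
         * Assumes a hypothetical Oracle procedure along these lines:
         *   PROCEDURE get_records_chunk(p_last_id IN NUMBER,
         *                               p_chunk   IN NUMBER,
         *                               p_rows    OUT SYS_REFCURSOR)
         * which opens a cursor over the next p_chunk rows whose ID is greater
         * than p_last_id, ordered by ID.
         */
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password")) {
                long lastId = 0;
                boolean more = true;
                while (more) {
                    more = false;
                    try (CallableStatement cs =
                             con.prepareCall("{ call get_records_chunk(?, ?, ?) }")) {
                        cs.setLong(1, lastId);
                        cs.setInt(2, 1000);                           // chunk size
                        cs.registerOutParameter(3, Types.REF_CURSOR); // older Oracle drivers need OracleTypes.CURSOR
                        cs.execute();
                        try (ResultSet rs = (ResultSet) cs.getObject(3)) {
                            while (rs.next()) {
                                lastId = rs.getLong("id");
                                System.out.println(rs.getString("name")); // process the row
                                more = true;
                            }
                        }
                    }
                }
            }
        }
    }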

  • Sites taking too much time to open and showing an error

    Hi,
    I've set up a SharePoint 2013 environment correctly and created a site collection, and everything was working fine. But suddenly, when I try to open that site collection or the Central Admin site, it takes too much time to open a page, and most of the time it does not open any page at all and shows the following error.
    I even went to the logs folder under the 15 hive but found nothing useful. Please tell me why it takes about 10-12 minutes to open a site or any page and then shows the error above.

    This usually happens if your hardware is below the requirements. Check whether your machine conforms to the required software and hardware specifications.
    https://technet.microsoft.com/en-us/library/cc262485.aspx
    http://sharepoint.stackexchange.com/questions/58370/minimum-real-world-system-requirements-for-sharepoint-2013
    Please remember to up-vote or mark the reply as answer if you find it helpful.

  • Creative Cloud is taking too much time to load and is not downloading the one month trial for Photoshop I just paid money for.

    Creative Cloud is taking too much time to load and is not downloading the one month trial for Photoshop I just paid money for.

    Stop the download if it's stalled, then restart it.

  • Hi, for the last two days my iPhone has been very slow to open apps and very slow when I check the notification window; it takes too much time to open when I swipe down. Help me resolve the issue.

    Hi, for the last two days my iPhone (iPhone 4 with iOS 5) has been very slow to open apps and very slow when I check the notification window; it takes too much time to open when I swipe down. Help me resolve the issue.

    The Basic Troubleshooting Steps are:
    Restart... Reset... Restore...
    iPhone Reset
    http://support.apple.com/kb/ht1430
    Try this First... You will Not Lose Any Data...
    Turn the Phone Off...
    Press and Hold the Sleep/Wake Button and the Home Button at the Same Time...
    Wait for the Apple logo to Appear and then Disappear...
    Usually takes about 15 - 20 Seconds... ( But can take Longer...)
    Release the Buttons...
    Turn the Phone On...
    If that does not help... See Here:
    Backing up, Updating and Restoring
    http://support.apple.com/kb/HT1414

  • Taking too much time and not connecting to ipad software server

    It is taking too much time, not connecting to the iPad software server, and not updating. Why?

    Because everyone is doing the same thing you are... be patient and you will get the update. I had to grab the update over 3G (it won't let you download or install, but it got me in the door) and then switch back to Wi-Fi to download and install.

  • HT201250 My Time Capsule is taking too much time indexing the backup and then even longer to back up (207 days or longer!). What shall I do?

    My Time Capsule is taking too much time indexing the backup and then even longer to back up (207 days or longer!). What shall I do?

    Try the 10.7.5 supplemental update.
    This update seems to have solved this problem for many.
    Best.

  • Hello guys, my iPhone's touch is not working on the edges and has some touch problems, and it is now taking too much time to respond. It was purchased in Kuwait; can I get it replaced in India?

    Hello guys, my iPhone's touch is not working on the edges and has some touch problems, and it is now taking too much time to respond. It was purchased in Kuwait; can I get it replaced in India? It is an iPhone 4S.

    saikumar3 wrote:
    ...it was purchased in Kuwait; can I get it replaced in India? It is an iPhone 4S.
    No, the warranty is not international and is only valid in the country of purchase.

  • Initramfs too slow and ISR taking too much time!

    On my SMP platform (2 cores with 10 siblings each) I have a PCIe device, a video card, which does DMA transfers to the application buffers.
    This works fine with SUSE 13.1 installed on an SSD drive.
    When I took the whole image, created an initramfs from it, and ran from that (without any HDD), the whole system is very slow; I am seeing PCIe ISRs taking too much time, and hence the driver is failing!
    Any help on this is much appreciated.
    There is a first initramfs image, which is the usual initrd openSUSE installs; I patched it to unpack the second initramfs (a full rootfs of 1.8 GB) into RAM (32 GB) as tmpfs and exec init from it.
    Last edited by Abhayadev S (2015-05-21 16:28:38)

    Abhayadev S,
    Although your problem definitely looks very interesting, we can't help with issues with SUSE Linux here.
    Or are you considering testing Arch Linux on that machine?

  • HT201328 Hello,  I needed to do a hard reset on my iPhone 3GS and it's taking me too much time. It's taking almost 12 hours by now!!!!   I did it by going to "Settings, General, Reset, Erase All Content and Settings", confirm it and then the iPhone turns

    Hello,
    I needed to do a hard reset on my iPhone 3GS and it's taking too much time. It has been almost 3 hours by now!
    I did it by going to "Settings, General, Reset, Erase All Content and Settings" and confirming it; then the iPhone turned off and the screen went black with only the animated circle spinning. And it has been there ever since.
    Is this a symptom of something bad? Should I do the hard reset another way?

    Hello Ali Syed,
    It sounds like your phone is stuck at the spinning gear icon on the screen after Erasing all Content and Settings on your device. I would start by resetting the device with this process:
    To reset, press and hold both the Sleep/Wake and Home buttons for at least 10 seconds, until you see the Apple logo.
    From: Turn your iOS device off and on (restart) and reset
    http://support.apple.com/kb/ht1430
    If the issue persists you may need to put the phone into recovery mode, and then restore it with iTunes as a new device:
    If you can't update or restore your iOS device
    http://support.apple.com/kb/ht1808
    Thank you for using Apple Support Communities.
    Take care,
    Sterling

  • Report taking too much time in the portal

    Hi friends,
    We have developed a report on the ODS, and we have published it on the portal.
    The problem is that when the users execute the report at the same time, it takes too much time; because of this, the performance is very poor.
    Is there any way to sort out this issue? For example, can we send the report to each individual user's mail ID so that they do not have to log in to the portal,
    or can we create the same report on the cube?
    What would be the main difference if the report were built on the cube rather than on the ODS?
    Please help me.
    Thanks in advance,
    Sridath

    Hi,
    Try this to improve the performance of the query.
    First, find the query run-time.
    Where do you find the query run-time? See these notes:
    Note 557870 - 'FAQ BW Query Performance'
    Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions if possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs (restricted and calculated key figures).
    3. Avoid too many characteristics in the rows.
    Check execution times using T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table RSDDSTATS to get the statistics.
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve their results faster from the OLAP cache.
    Also try:
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio and the number of records transferred to the front end vs. records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW InfoCubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > run the program SAP_INFOCUBE_DESIGNS.
    It shows dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. To check the performance of the aggregates, see the Valuation and Usage columns.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The '-'/'+' signs give the valuation of the aggregate design and usage. '++' means its compression is good and it is accessed often (in effect, performance is good); if you check its compression ratio, it should be good. '--' means the compression ratio and the access are not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation of the aggregate.
    '-----' means the aggregate is just overhead and can potentially be deleted; '+++++' means the aggregate is potentially very useful.
    In short: in the Valuation column, more plus signs mean the aggregate performs well and is worth keeping, while more minus signs mean it is better not to use that aggregate.
    In the Usage column, you can see how often the aggregate has actually been used by queries.
    Thus you can check the performance of the aggregate.
    Refer to:
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    Performance issues related to aggregates:
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This tells you whether the query hit any aggregates while running; if it does not show any aggregates, you might want to redesign your aggregates for the query.
    Your query performance can also depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
    Implement the BW Statistics Business Content: you need to install it, feed data into it, and analyse it through the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-code DB20, which gives you all the performance-related information, such as:
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces, etc.
    Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 202469 - Using the aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless by checking the tables RSDDSTATAGGRDEF*.
    Run the query in RSRT with statistics execution; when you come back you will get a STATUID. Copy it and look it up in the table.
    This shows exactly which InfoObjects the query hits; if any one of the objects is missing, the aggregate is useless.
    6. Check SE11 > table RSDDAGGRDIR. You can find the last call-up of each aggregate in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • Procedure Taking too Much Time

    Hi Guys,
    I have a package which contains a number of functions and procedures linked with Oracle's standard report called Production Shortage Report. It is taking too much time; these days it does not even finish after three days.
    So I am looking for some tuning steps which I can follow for all the queries one by one. I tried to use Explain Plan, but it does not show much difference after moving a column from one location to another, etc.; I have even updated the existing indexes and also created new indexes. (A small JDBC sketch of capturing an Explain Plan appears after this thread's reply.)
    Thanks
    Shishu Paul

    Shishu,
    Is this report part of an Oracle product such as the E-Business Suite (EBS)? If so, I recommend you create a Service Request (SR) with Oracle and have them tune the report. Doing your own tuning could cause you support problems later on.
    Just my two cents...
    Craig
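
    For reference, one way to capture an Explain Plan from Java, if that is more convenient than SQL*Plus, is sketched below. The query, the schema objects and the connection URL are placeholders invented for the example; EXPLAIN PLAN and DBMS_XPLAN.DISPLAY themselves are standard Oracle features.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ExplainPlanSketch {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
                 Statement stmt = con.createStatement()) {
                // Ask Oracle to store the execution plan for the slow statement in PLAN_TABLE.
                stmt.execute("EXPLAIN PLAN FOR "
                        + "SELECT * FROM some_slow_view WHERE org_id = 101");
                // Read the formatted plan back via DBMS_XPLAN.
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY)")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }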

  • ODS to CUBE load taking too much time..

    Hi all ,
    We are loading data from our ZODS into ZCUBE, but the data load is taking too much time. We haven't created any indexes; we also tried making an InfoSource for the ODS, but still the same problem. It always shows 0 of 345674 records, i.e. the records are not getting extracted from the ODS.
    Can anybody help me in this regard? It is a bit urgent.
    Thanks in advance.

    Hi,
    There are a few things you can check. First, check with ST22 whether this job has ended in a dump.
    The next thing you can do, if the job doesn't end abnormally, is to reduce the number of records processed at the same time. Sometimes the system has trouble if the number of records it has to process is too large. Go to the InfoPackage -> DataS. Default Data Transfer -> set the maximum to 10% of the default value. Try to run the load again.
    If the job still doesn't finish, check whether there are any ABAP routines and/or formulas involved in the update rules; maybe they are running in a loop.
    Regards,
    Raymond Baggen
    Uphantis bv

  • Importing is taking too much time (2 DAYS)

    Dear All,
    I'm importing the support packages below together in one queue on SAP Solution Manager 4.0:
    SAPKB70015             Basis Support Package 15 for 7.00
    SAPKA70015             ABA Support Package 15 for 7.00
    SAPKITL426             ST 400: Patch 0016, CRT for SAPKB70015
    SAPKIBIIP6             BI_CONT 703: patch 0006
    SAPKIBIIP7             BI_CONT 703: patch 0007
    SAPKIBIIP8             BI_CONT 703: patch 0008
    SAPK-40010INCPRXRPM    CPRXRPM 400: patch 0010
    SAPK-40011INCPRXRPM    CPRXRPM 400: patch 0011
    SAPK-40012INCPRXRPM    CPRXRPM 400: patch 0012
    SAPKIPYJ7E             PI_BASIS 2005_1_700: patch 0014
    SAPKW70016             BW Support Package 16 for 7.00
    The import is taking too much time (2 days) in the main import phase. In the SLOG I see many rows like "I am waiting 1 sec" and "6 sec". I have also checked transaction code STMS: all support packages were imported except one, "SAPKW70016".
    Please advise.
    Best regards,
    HE

    Hello Mohan,
    The DBTABLOG table does get large; the best option is to switch off logging. If that's not possible, increase the frequency of your delete job. Also explore one more alternative: have a look at the archiving object BC_DBLOGS; you could archive old records (in accordance with your customer's data retention policies) to reduce the size of the table.
    Also, have a look at the following notes; they will advise you on how to improve the performance of your delete job:
    Note 531923 - Audit Trail: Indexes on table DBTABLOG
    Note 579980 - Table logs: Performance during access to DBTABLOG
    Regards,
    Siddhesh

  • Uploading Taking too much Time

    Hi all,
    I am uploading my video files from a client computer to the server. The server has 1 terabyte of storage, of which 500 GB is free. The upload is taking too much time, even though there is no other computer connected, only one client and one server. The left pane "jobs in progress" is shown and never ends.
    What could be the possible reason for that?
    Junaid
    Broadcast Engineer

    Our FCS server was hellishly slow, to the point where our producers never used it, as it would take them forever to upload their capture scratches.
    We recently installed a Fiber network with an Xsan and switched all of the producers over to that network. Now it is just as fast as (if not faster than) using an external USB/FireWire drive. Now the only challenge is to get them to use FCS again...
    But really, if you're on a 10/100 network then you might as well not use FCS for any type of real movie editing. Even on our 10/100/1000 network it was still slow enough to make everyone hate using FCS. So if you are planning a wide-scale FCS deployment, prepare to shell out some cash for Fiber.

  • Import taking too much time

    Hi all
    I'm quite new to database administration. My problem is that I'm trying to import a dump file, but one of the tables is taking too much time to import.
    Description:
    1. The export was taken from the source database, which is Oracle 8i with character set WE8ISO8859P1.
    2. I am importing into 10g with character set UTF8; the national character set is also the same.
    3. The dump file is about 1.5 GB.
    4. I got errors like "value too large for column", so in the target DB, which is UTF8, I converted all columns from VARCHAR2 to CHAR.
    5. While importing, some tables import very fast, but at one particular table it gets very slow.
    Please help me. Thanks in advance.

    Hello,
    Regarding point 4 ("I got errors like value too large for column, so in the target DB, which is UTF8, I converted all columns from VARCHAR2 to CHAR"): this is typically due to the character set conversion.
    You export data in WE8ISO8859P1 and import into UTF8. In WE8ISO8859P1, characters are encoded in 1 byte, so 1 CHAR = 1 BYTE. In UTF8 (Unicode), characters are encoded in up to 4 bytes, so 1 CHAR can be more than 1 BYTE.
    For this reason you'll have to increase the length of your CHAR or VARCHAR2 columns, or add the CHAR option (the default is BYTE) in the column datatype definition of the tables. For instance: VARCHAR2(100 CHAR).
    The NLS_LENGTH_SEMANTICS parameter may also be used, but it is not handled very well by export/import.
    So, I suggest the following:
    1. Set NLS_LENGTH_SEMANTICS=CHAR on your target database and restart the database.
    2. Create all your tables (empty) from a script on the target database (without the indexes and constraints).
    3. Import the data into the tables.
    4. Import the indexes and constraints.
    (A small JDBC sketch of steps 1 and 2 appears after this message.)
    You'll find more information in the following MOS note:
    Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]
    Regarding point 5 ("while importing, some tables import very fast, but at one particular table it gets very slow"): it may be due to the conversion problem you are experiencing; it may also be due to some special datatype such as LONG.
    Also, I have a question: why did you choose UTF8 for your target database and not AL32UTF8?
    AL32UTF8 is recommended for Unicode use.
    Hope this helps.
    Best regards,
    Jean-Valentin
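
    Purely as an illustration of steps 1 and 2 above, here is a minimal JDBC sketch that switches length semantics to CHAR (shown at session level rather than via the database parameter) and pre-creates an empty table with explicit CHAR semantics. The connection URL, the table and the columns are made up for the example.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CharSemanticsSetup {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
                 Statement stmt = con.createStatement()) {
                // Make length semantics CHAR for this session, so VARCHAR2(100)
                // means 100 characters rather than 100 bytes.
                stmt.execute("ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR");
                // Pre-create the (empty) target table before running the import;
                // the explicit CHAR keyword makes the intent clear either way.
                stmt.execute("CREATE TABLE customers ("
                        + " id   NUMBER PRIMARY KEY,"
                        + " name VARCHAR2(100 CHAR))");
            }
        }
    }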
