Replacement takes too long: about 3 months

Is it normal for an iPad replacement to take about 3 months? Every month they send a message saying the unit has arrived, and yet there is still a factory defect; it's like they're obviously making excuses over and over. Should I ask for an incentive for waiting so long? By the way, I'm from the Philippines.

For my part it's unfair, because I paid cash for it and used it for only one week, and it's obviously a factory defect; how come they cannot replace it right away??? I've been using Apple products since 2003 or 2004, if I'm not mistaken; those were the days of the first-generation iPod shuffle. It's very sad. (Sigh!)

Similar Messages

  • My Query takes too long ...

    Hi,
    Environment: DB 10g, O/S Linux Red Hat; my DB size is about 80 GB.
    My query takes too long, about 5 days, to get results. Can you please help rewrite this query in a better way?
    declare
      x          number;
      y          date;
      start_date date;
      mdn        varchar2(12);
      topup      varchar2(50);
    begin
      -- one row per bundle: the first purchase event per account / top-up profile
      for first_bundle in (
        select min(date_time_of_event) date_time_of_event,
               account_identifier,
               top_up_profile_name
        from   bundlepur
        where  account_profile = 'Basic'
        and    account_identifier = '665004664'
        and    in_service_result_indicator = 0
        and    network_cause_result_indicator = 0
        and    date_time_of_event >= to_date('16/07/2013', 'dd/mm/yyyy')
        group  by account_identifier, top_up_profile_name
        order  by date_time_of_event
      )
      loop
        select sum(units_per_tariff_rum2), max(date_time_of_event)
        into   x, y
        from   old_lte_cdr
        where  account_identifier=(select first_bundle.account_identifier from dual)
        and    date_time_of_event >= (select first_bundle.date_time_of_event from dual)
        -- no more than a month
        and    date_time_of_event < (select add_months(first_bundle.date_time_of_event, 1) from dual)
        -- finished his bundle, then bought a new one
        and    date_time_of_event < (select min(date_time_of_event)
                                     from   old_lte_cdr
                                     where  date_time_of_event > (select first_bundle.date_time_of_event + 1/24 from dual)
                                     and    in_service_result_indicator = 26);
        select first_bundle.account_identifier, first_bundle.top_up_profile_name,
               first_bundle.date_time_of_event
        into   mdn, topup, start_date
        from   dual;
        insert into consumed1 values (x, topup, mdn, start_date, y);
      end loop;
      commit;
    end;

    > where account_identifier=(select first_bundle.account_identifier from dual)
    Why are you doing this?  It's a completely unnecessary subquery.
    Just do this:
    where account_identifier = first_bundle.account_identifier
    Same for all your other FROM DUAL subqueries.  Get rid of them.
    More importantly, don't use a cursor for loop.  Just write one big INSERT statement that does what you want.
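    For illustration, a set-based rewrite might look like the sketch below. It is untested: the column order of consumed1 is assumed from the VALUES list above, and unlike the loop it inserts nothing for a bundle that has no matching CDR rows.
    -- Sketch: one INSERT ... SELECT replacing the whole cursor loop.
    -- The inline view fb plays the role of the first_bundle cursor.
    insert into consumed1
    select sum(c.units_per_tariff_rum2),   -- x
           fb.top_up_profile_name,         -- topup
           fb.account_identifier,          -- mdn
           fb.date_time_of_event,          -- start_date
           max(c.date_time_of_event)       -- y
    from  (select min(date_time_of_event) date_time_of_event,
                  account_identifier,
                  top_up_profile_name
           from   bundlepur
           where  account_profile = 'Basic'
           and    account_identifier = '665004664'
           and    in_service_result_indicator = 0
           and    network_cause_result_indicator = 0
           and    date_time_of_event >= to_date('16/07/2013', 'dd/mm/yyyy')
           group  by account_identifier, top_up_profile_name) fb,
          old_lte_cdr c
    where c.account_identifier = fb.account_identifier
    and   c.date_time_of_event >= fb.date_time_of_event
    -- no more than a month
    and   c.date_time_of_event < add_months(fb.date_time_of_event, 1)
    -- finished his bundle, then bought a new one
    and   c.date_time_of_event < (select min(o.date_time_of_event)
                                  from   old_lte_cdr o
                                  where  o.date_time_of_event > fb.date_time_of_event + 1/24
                                  and    o.in_service_result_indicator = 26)
    group by fb.account_identifier, fb.top_up_profile_name, fb.date_time_of_event;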

  • OPM process execution: process parameters take too long to complete

    PROCESS_PARAMETERS are inserted every 15 minutes using gme_api_pub packages. Sometimes it takes too long to complete the batch, i.e. the completion of the request: about 5-6 hours, while at other times it takes only 15-20 minutes. This happens at regular intervals... if anybody can guide me I will be thankful to him/her.
    Thanks in advance.
    Regards,
    Shailesh

    Generally the slowest part of the process is the extraction itself...
    Check in your source system and see how long the processes are taking, and whether there are delays, locks or dumps in the database... If your source is R/3 or ECC, transactions like SM37, SM21 and ST22 can help monitor this activity...
    Consider running fewer processes in parallel if you have too many and are seeing delays in jobs... Also consider indexing some of the tables in the source system to expedite the extraction, and make sure there are no heavy processes or interfaces running in the source system at the same time you're trying to load... Check with your Basis guys for activity peaks and plan accordingly...
    In BW also check in your SM21 for database errors or delays...
    Just some ideas...

  • Web application deployment takes too long?

    Hi All,
    We have a WLS 10.3.5 clustering environment with one admin server and two managed servers. When we try to deploy a sizable web application, it takes about 1 hour to finish. It seems that it takes too long to finish the deployment. Here is the output from one of the two managed servers' system logs. Could anyone tell me whether this is normal or not? If not, how can I improve it?
    Thanks in advance,
    John
    ####<Feb 29, 2012 12:11:03 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535463373> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_NEW to STATE_PREPARED on server Pinellas1tMS3.>
    ####<Feb 29, 2012 12:11:05 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <9baa7a67b5727417:26f76f6c:135ca05cff2:-8000-00000000000000b0> <1330535465664> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_NEW to STATE_PREPARED on server Pinellas1tMS3.>
    ####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466493> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_PREPARED to STATE_ADMIN on server Pinellas1tMS3.>
    ####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466493> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_PREPARED to STATE_ADMIN on server Pinellas1tMS3.>
    ####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466809> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_ADMIN to STATE_ACTIVE on server Pinellas1tMS3.>
    ####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466809> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_ADMIN to STATE_ACTIVE on server Pinellas1tMS3.>
    ####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442300> <BEA-320143> <Scheduled 1 data retirement tasks as per configuration.>
    ####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320144> <Size based data retirement operation started on archive HarvestedDataArchive>
    ####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320145> <Size based data retirement operation completed on archive HarvestedDataArchive. Retired 0 records in 0 ms.>
    ####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320144> <Size based data retirement operation started on archive EventsDataArchive>
    ####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320145> <Size based data retirement operation completed on archive EventsDataArchive. Retired 0 records in 0 ms.>
    ####<Feb 29, 2012 1:10:23 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <weblogic.cluster.MessageReceiver> <<WLS Kernel>> <> <> <1330539023098> <BEA-003107> <Lost 2 unicast message(s).>
    ####<Feb 29, 2012 1:10:36 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539036105> <BEA-000111> <Adding Pinellas1tMS2 with ID -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 to cluster: Pinellas1tCluster1 view.>
    ####<Feb 29, 2012 1:11:24 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[STANDBY] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539084375> <BEA-000128> <Updating -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 in the cluster.>
    ####<Feb 29, 2012 1:11:24 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[STANDBY] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539084507> <BEA-000128> <Updating -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 in the cluster.>

    Hi John,
    There may be some circumstances, like when there are many files in the WEB-INF folder and the JSPs don't use TLDs.
    I don't think a 1-hour deployment is normal; it should be much faster.
    Since you are using 10.3.5, I suggest you install the corresponding patch:
    1. Download patch 10118941 (p10118941_1035_Generic.zip).
    2. Uncompress the file p10118941_1035_Generic.zip
    3. Copy the required files (patch-catalog_XXXXX.xml, CIRF.jar ) to the Patch Download Directory (typically, this folder is <WEBLOGIC_HOME>/utils/bsu/cache_dir).
    4. Rename the file patch-catalog_XXXXX.xml into patch-catalog.xml .
    5. Start Smart Update from <WEBLOGIC_HOME>/utils/bsu/bsu.sh .
    6. Select "Work Offline" mode.
    7. Go to File->Preferences, and select "Patch Download Directory".
    8. Click "Manage Patches" on the right panel.
    9. You will see the patch in the panel below (Downloaded Patches)
    10. Click "Apply button" of the downloaded patch to apply it to the target installation and follow the instructions on the screen.
    11. Add "-Dweblogic.jsp.ignoreTLDsProcessingInWebApp=true" to the Java options to ignore additional findTLDs cost.
    12. Restart servers.
    Hope this helps.
    Thanks,
    Cris

  • Query designer takes too long to save a query

    Hi dear SDN friends.
    I'm working with Query Designer in BI 7 and sometimes it takes too long to save a query, about ten minutes. Sometimes it never finishes saving, and at other times it saves the same query in 1 minute.
    Can anybody please give me advice about this behavior of Query Designer?
    We have recently updated BI to SP18. In Query Designer I have SP5, revision 529.
    Best regards,
    Karim Reyes

    Hello Karim,
    I would suggest testing this again in the latest Frontend Patch available (FEP 602).  In FEPs 600, 601, & 602 there were some performance and stability improvements made which may correct your issue.  If the issue persists, I would suggest then opening a Customer Message via Service Marketplace.
    It can be downloaded from:
        http://service.sap.com/swdc
    → Download
    → Support Packages and Patches
    → Entry by Application Group
    → SAP Frontend Components
    → BI ADDON FOR SAPGUI
    → BI 7.0 ADDON FOR SAPGUI 7.10
    → BI 7.0 ADDON FOR SAPGUI 7.10
    → Win32
    See SAP Note 1085218 for planned FEP releases.
    I hope that helps.
    Regards,
    Tanner Spaulding
    SAP NetWeaver RIG Americas, BI

  • Unlike IE, when the back button is pressed it takes too long. Please do something about that. Thanks.

    Unlike IE, when the back button is pressed it takes too long. Please do something about that. I like Firefox and I get to use the back button more often.
    Thanks

    In order to be able to find the correct solution to your problem, we require some more non-personal information from you. Please do the following:
    *Click the Firefox button at the top left, then click the ''Help'' menu and select ''Troubleshooting Information'' from the submenu. If you don't have a Firefox button, click the Help menu at the top and select ''Troubleshooting Information'' from the menu.
    Now, a new tab containing your troubleshooting information should open.
    *At the top of the page, you should see a button that says "Copy text to clipboard". Click it.
    *Now, go back to your forum post and click inside the reply box. Press Ctrl+V to paste all the information you copied into the forum post.
    If you need further information about the Troubleshooting information page, please read the article [[Use the Troubleshooting Information page to help fix Firefox issues]].
    Thanks in advance for your help!

  • RTF export takes too long

    Hi all,
    Do you know why, for a report with a lot of data and many pages (about 300 pages), exporting in PDF format takes a short time (1 min), while exporting in RTF format takes too long (even 10 minutes) and sometimes even times out?
    Have an idea?
    Thanks

    re: Paul.  Not true, or shouldn't be.  XDCAM HD timeline, XDCAM HD output.
    re: Michael.  Well, close.  I mixdown the edited timeline, then place that into another sequence where I apply a Broadcast Safe filter and hit it with a bit of audio compression.  I render that, ridding myself of the dreaded "red lines".  However, when I splat-E, I get red lines all over again while I'm exporting.  When the export is done, the red lines disappear.  Now as an aside, I do experience actual "missing" render files, but that occurs when I quit a project and then re-open it later...they sometimes don't seem to re-link.  As I mentioned in this post, I have a feeling that's because this system is set up in the worst way possible (a single RAID5 which acts as system drive and media storage).
    Don't blame me, I'm just trying to work within the confines of what I'm given. My major concern here is that it seems to be getting worse.

  • Drill-through reports take too long

    Hi all,
    I need some suggestions/help with our drill-through reports. We are using Hyperion 11.1.1.3 and the cube is ASO.
    We have drill-through reports set up in Essbase Studio for drilling down from Essbase to an Oracle database. It takes too long (about 30 mins to fetch 1000 records) and the query is simple.
    What changes can we make to bring down this time? Please advise.
    Thanks.

    Hi Glenn,
    We tried optimizing the drill-through SQL query, but when we run it directly in TOAD it takes 23 secs, whereas the drill-through on the same intersection took more than 25 mins. Following is our query structure:
    (SELECT *
    FROM "Table A" cp_594
    INNER JOIN "Table B" cp_595 ON (cp_594.key = cp_595.key)
    WHERE (Upper(cp_595."Dim1") in (select Upper(CHILD) from (SELECT * FROM DIM_TABLE_1 where CUBE = 'ALL') WHERE CONNECT_BY_ISLEAF = 1 START WITH PARENT = $$Dim1$$ CONNECT BY PRIOR CHILD = PARENT UNION ALL select Upper(CHILD) from DIM_TABLE_1 where CUBE = 'ALL' AND REPLACE('GL_'||CHILD, 'GL_IC_', 'IC_') = $$Dim1$$))
    AND -- same for 5 more dimensions
    Can you suggest some improvement? Please advise.
    Thanks
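    One avenue worth testing (a sketch only, not verified against your schema; the index name here is made up) is a function-based index, so the Upper() predicate on the fact table can use an index instead of forcing a full scan:
    -- Hypothetical function-based index matching the Upper("Dim1") predicate.
    CREATE INDEX idx_cp595_dim1_upper ON "Table B" (UPPER("Dim1"));
    -- Refresh optimizer statistics so the new index is considered.
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, '"Table B"', cascade => TRUE);
    It is also worth comparing the actual values the drill-through substitutes for $$Dim1$$ with the literals you used in TOAD; a different plan for different values would go a long way toward explaining a 23-second vs 25-minute gap.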

  • Why finding a replication stream matchpoint takes too long

    Hi,
    I am using BDB JE 5.0.58 HA (a two-node group, with a 6 GB JVM for each node).
    Sometimes I find that a bdb node takes too long to restart (about 2 hours).
    When this occurs, I capture the process stack of bdb (the JVM process) with jstack.
    After analyzing the stack, I found "ReplicaFeederSyncup.findMatchpoint()" taking all the time.
    I want to know why this method takes so much time, and how I can avoid this bad case.
    Thanks.

    Liang,
    2 hours is indeed a huge amount of time for a node restart. It's hard to be sure without doing a more detailed analysis of your log as to what may be going wrong, but I do wonder if it is related to the problem you reported in "outOfMemory error presents when cleaner occurs" [#21786]. Perhaps the best approach is for me to describe in more detail what happens when a replicated node is connecting with a new master, which might give you more insight into what is happening in your case.
    The members of a BDB JE HA replication group share the same logical stream of replicated records, where each record is identified with a virtual log sequence number, or VLSN. In other words, the log record described by VLSN x on any node is the same data record, although it may be stored in a physically different place in the log of each node.
    When a replica in a group connects with a master, it must find a common point, the matchpoint, in that replication stream. There are different situations in which a replica may connect with a master. For example, it may have come up and just joined the group. Another case is when the replica is up already but a new master has been elected for the group. One way or another, the replica wants to find the most recent point in its log, which it has in common with the log of the master. Only certain kinds of log entries, tagged with timestamps, are eligible to be used for such a match, and usually, these are transaction commits and aborts.
    Now, in your previous forum posting, you reported an OOME because of a very large transaction, so this syncup issue at first seems like it might be related. Perhaps your replication nodes need to traverse a great many records, in an incomplete transaction, to find the match point. But the syncup code does not blindly traverse all records, it uses the vlsn index metadata to skip to the optimal locations. In this case, even if the last transaction was very long, and incomplete, it should know where the previous transaction end was, and find that location directly, without having to do a scan.
    As a possibly related note, I did wonder if something was unusual about your vlsn index metadata. I did not explain this in "outOfMemory error presents when cleaner occurs", but I later calculated that the transaction which caused the OOME should only have contained 1,500 records. I think you said that you figured out that you were deleting about 15 million records, and that it was the vlsn index update transaction which was holding many locks. But because the vlsn index does not record every single record, it should only take about 1,500 metadata records in the vlsn index to cover 15 million application data records. It is still a bug in our code to update that many records in a single transaction, but the OOME was surprising, because 1,500 locks shouldn't be catastrophic.
    There are a number of ways to investigate this further.
    - You may want to try using a SyncupProgress listener described at http://docs.oracle.com/cd/E17277_02/html/java/com/sleepycat/je/rep/SyncupProgress.html to get more information on which part of the syncup process is taking a long time.
    - If that confirms that finding the matchpoint is the problem, we have an unadvertised utility, meant for debugging, to examine the vlsn index. The usage is as follows; you would use the -dumpVLSN option and run this on the replica node. But this would require our assistance to interpret the results. We would be looking for the records that mention where "sync" points are, and would correlate that to the replica's log, and that might give more information if this is indeed the problem, and why the vlsn index was not acting to optimize the search.
    $ java -jar build/lib/je.jar DbStreamVerify
    usage: java { com.sleepycat.je.rep.utilint.DbStreamVerify | -jar je-<version>.jar DbStreamVerify }
    -h <dir> # environment home directory
    -s <hex> # start file
    -e <hex> # end file
    -verifyStream # check that replication stream is ascending
    -dumpVLSN # scan log file for log entries that make up the VLSN index, don't run verify.
    -dumpRepGroup # scan log file for log entries that make up the rep group db, don't run verify.
    -i # show invisible. If true, print invisible entries when running verify mode.
    -v # verbose

  • Why does iPad 2 / iPhone 3G resetting take too long?

    I'm trying to reset and delete all content on my iPad and iPhone 3G by choosing the reset options under General. Why does it take so long to reset my iPad / iPhone 3G? It has been almost 2 days that my iPad has been stuck on the Apple logo / looping circle, and I am still waiting for my iPad / iPhone 3G to return to the main screen. What should I do? Thanks...

    There is no other way except to restore the iPad - plain and simple. You have to restore the device within iTunes. You want to use the same computer that you always sync with so that you can restore your app data and settings. You can restore with any other computer, but you will lose everything on the iPad.
    You will need to use recovery mode
    iPad: Unable to update or restore

  • Having lagging problems with YouTube videos: it simply takes so long for the red bar to completely load, and the videos frequently pause and take too long to restart playing again.

    I have been having lagging problems with YouTube videos for a number of months now. It simply takes so long for the red bar to completely load, and the videos frequently pause and take too long to restart playing again, even with little 2- and 3-minute videos.
    I have a fast computer and my webpages load really, really fast. I have the Firefox 4 browser and a Vista Home Premium 64-bit OS, so I don't have a slow computer or web browser. But these slow YouTube vids take way too long to load for some reason.
    Does anyone have any idea how I can speed up YouTube?

    Hi
    The forums are customer-to-customer in the first instance; only the mods (BT) will ask for personal information through an email link. Would you mind posting your Hub stats & BT speed test results? This will help all with diagnosis.
    To post the full stats from your router:
    for Home Hub - 192.168.1.254
    Navigate to ADSL Settings or use the A-Z at the top right.
    Click on More Details and then post the results.
    Run the BT speed tester and post the results from http://speedtester.bt.com/
    If possible it would be best to connect at the BT master socket; this will rule out any telephone/broadband extension wiring. Also consider the housing of the hub/router, as anything electrical can cause problems. These are your responsibility; if they are found to be the cause, Openreach (the engineer) will charge BT Broadband, which will be passed on to you - around £130.00.
    Noisy line? Listen when making telephone calls; if you hear noise, this is not good for your broadband. You can check:
    Quiet line test: dial 17070, option 2, and listen - you should hear nothing. This is best done with an old-type analogue phone; a digital (DECT) phone will do but may have a slight hiss. If you hear noise (crackling, pops etc.), report it as a noisy line on your phone - don't mention broadband - BT Faults on 151.
    As for your FTTC, it's available in some areas; between 40% & 80% of customers in enabled areas can receive it!

  • [SOLVED] initramfs takes too long to load

    Using systemd-analyze I found out that initramfs takes too long to load:
    463ms (kernel) + 11875ms (initramfs) + 6014ms (userspace) = 18353ms
    My HOOKS array in mkinitcpio.conf is the following:
    HOOKS="base udev autodetect modconf block encrypt lvm2 filesystems usbinput fsck"
    I suspect that the long loading time of initramfs is caused by partitions decryption (I am using dm-crypt / LUKS on top of LVM).
    Is there any tool that can report loading times of HOOKS seperately, just like systemd-analyze plot does for userspace?

    cfr wrote:
    In what sense is it "too long"?
    I'm just wondering: suppose that you find out that it is because you are using encryption. Would you then switch to a non-encrypted system? Would you make better use of the extra seconds you might save on those rare occasions when you reboot? Even if you reboot twice a day, you might save what? Suppose you would even save 5s per boot. That will give you a whole extra 1 minute and 10 seconds a week. Assuming you don't multitask. Obviously if you multitask, the gain will be less. Would that make it worth risking the security of your data?
    EDIT: I didn't mean this to sound as confrontational as it does now I read it back. It just always puzzles me that people are so concerned about shaving a few milliseconds here and there. I always hope that they put the time they save to good use but then I realise that the time they spent shaving the milliseconds off will obviously outstrip the time saved.
    I really don't care about the boot time, for the reasons you have already mentioned. I just want to figure out if there is any misconfiguration. I am just investigating why initramfs takes significantly longer to load compared with my desktop Arch installation (non-encrypted), where initramfs takes 1316ms. My desktop has a Pentium 4 CPU and the laptop has a quad-core i7.
    roentgen wrote: 11875ms (initramfs) means the time it takes you to type the password.
    systemd-analyze does not count the time spent typing the password.

  • Sometimes my computer takes too long to connect to a new website. I am running a pretty powerful work program at the same time; what is the best solution? Upgrading speed from the cable network? Is it a hard drive issue? Do I need to "clean out" the computer?

    Many times my computer takes too long to connect to a new website. I have wireless internet (Time Capsule) and I am running a pretty powerful real-time financial work program at the same time. What is the best solution? Upgrading speed from the cable network? Is it a hard drive issue? Do I only need to "clean out" the computer? Or all of the above... I'm not too computer savvy. It is a MacBook Pro, OS X 10.6.8 (late 2010).

    Almost certainly none of the above!  Try each of the following in this order:
    Select 'Reset Safari' from the Safari menu.
    Close down Safari;  move <home>/Library/Caches/com.apple.Safari/Cache.db to the trash; restart Safari.
    Change the DNS servers in your network settings to use the OpenDNS servers: 208.67.222.222 and 208.67.220.220
    Turn off DNS pre-fetching by entering the following command in Terminal and restarting Safari:
              defaults write com.apple.safari WebKitDNSPrefetchingEnabled -boolean false

  • Accessing BKPF table takes too long

    Hi,
    Is there another way to have a faster and more optimized sql query that will access the table BKPF? Or other smaller tables that contain the same data?
    I'm using this:
       select bukrs gjahr belnr budat blart
       into corresponding fields of table i_bkpf
       from bkpf
       where bukrs eq pa_bukrs
       and gjahr eq pa_gjahr
       and blart in so_DocTypes
       and monat in so_monat.
    The report is taking too long and is eating up a lot of resources.
    Any helpful advice is highly appreciated. Thanks!

    Hi max,
    I also tried using BUDAT in the where clause of my sql statement, but even that takes too long.
        select bukrs gjahr belnr budat blart monat
         appending corresponding fields of table i_bkpf
         from bkpf
         where bukrs eq pa_bukrs
         and gjahr eq pa_gjahr
         and blart in so_DocTypes
         and budat in so_budat.
    I also tried accessing the table per day, but that didn't work either...
       while so_budat-low le so_budat-high.
         select bukrs gjahr belnr budat blart monat
         appending corresponding fields of table i_bkpf
         from bkpf
         where bukrs eq pa_bukrs
         and gjahr eq pa_gjahr
         and blart in so_DocTypes
         and budat eq so_budat-low.
         so_budat-low = so_budat-low + 1.
       endwhile.
    I think our BKPF tables contains a very large set of data. Is there any other table besides BKPF where we could get all accounting document numbers in a given period?

  • Report Takes Too Long

    Hi!
    I am in trouble. Following is the query:
    SELECT inv_no, inv_name, inv_desc, i.cat_id, cat_name, i.sub_cat_id,
    sub_cat_name, asset_cost, del_date, i.bl_id, gen_desc bl_desc, p.prvcode, prvdesc, cur_loc,
    pldesc, i.pmempno, pmname, i.empid, empname
    FROM inv_reg i,
    cat_reg c,
    sub_cat_reg s,
    gen_desc_reg g,
    ploc p,
    province r,
    pmaster m,
    iemp_reg e
    WHERE i.sub_cat_id = s.sub_cat_id
    AND i.cat_id = s.cat_id
    AND s.cat_id = c.cat_id
    AND i.bl_id = g.gen_id
    AND i.cur_loc = p.plcode
    AND p.prvcode = r.prvcode
    AND i.pmempno = m.pmempno(+)
    AND i.empid = e.empid(+)
    &wc
    order by prvdesc, pldesc, cat_name, sub_cat_name, inv_no
    This query returns 32,000 records. When I run it in Reports 10g, it takes 10 to 20 minutes to generate the report.
    How can I optimize it?

    Hi Waqas Attari,
    Please study & try this:
    When your query takes too long ...
    Hope it helps.
    Regards,
    Abdetu...
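    In case that link goes stale: the usual first step in such a tuning checklist is to look at the execution plan. A minimal sketch with standard Oracle tools (the query is abbreviated here; run it with the full report query):
    -- Ask the optimizer for its plan without executing the query.
    EXPLAIN PLAN FOR
    SELECT inv_no, inv_name   /* ... rest of the report query ... */
    FROM   inv_reg i, cat_reg c
    WHERE  i.cat_id = c.cat_id;
    -- Display the plan. Full scans of the big tables, or joins on
    -- unindexed columns (e.g. the (+) outer-join keys), are the usual suspects.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    It also helps to time the bare SQL in SQL*Plus first; with 32,000 rows, the split between database time and Reports formatting time tells you which side to tune.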
