RSBATCHDATA table in BI increasing very fast

Hi All,
Our BI production server is installed on Windows 2003 with MaxDB as the database, at SP level 15.
In our DB, the RSBATCHDATA table is growing very fast.
Is there any way to reduce the size of this RSBATCHDATA table in the DB?
SAP Note checked: 1292051
Any suggestion is welcome.
Regards,
Sharib Tasneem

Hi Naveed/Jaun,
I have used transaction RSBATCH and selected the option to delete messages older than 7 days, but the job ran for only 80 seconds.
Under the settings for "parallel processing" I left "Select Process" empty.
Before running this, the size of RSBATCHDATA was 11 GB, and the size is still the same after executing the job.
Is there anything else to be specified?
Any suggestion is welcome.
Regards,
Sharib Tasneem
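
For anyone checking whether the deletion actually removed rows: below is a minimal sketch in Python, assuming an ODBC DSN for the MaxDB instance and a user with read access to the SAP schema (all connection details are placeholders; schema-qualify the table name if needed). Note that even when rows are deleted, the space allocated to the table on disk normally stays the same until the table is reorganized, so an unchanged 11 GB by itself does not prove the deletion did nothing.

import pyodbc

def rsbatchdata_rowcount(dsn='DSN=BIP;UID=monitor;PWD=secret'):
    # Return the current number of rows in RSBATCHDATA.
    conn = pyodbc.connect(dsn)
    try:
        cur = conn.cursor()
        cur.execute('SELECT COUNT(*) FROM RSBATCHDATA')
        return cur.fetchone()[0]
    finally:
        conn.close()

# Run once before and once after the RSBATCH deletion job and compare the counts.
print(rsbatchdata_rowcount())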

Similar Messages

  • BPM data increases very fast and I want to get suggestions about BPM capacity

    Dear BPM Experts:
    I have a problem with BPM capacity. My customer is using BPM 11g, and every day they
    have 1,000 new processes; every process has 20-30 tasks. They find the data grows very fast, about 1 GB/day.
    We have done a test of BPM capacity: I created a new simple process named simpleProcess,
    which has only three input fields, and I used the API to initiate the task and submit it to the next
    person.
    We are using the dev_soainfra tablespace with the default audit level. After inserting 5,000 tasks, we found dev_soainfra had reached 362.375 MB,
    so we assume 30,000 tasks will use 362 * 6 = about 2 GB of database space. And because in the next phase my customer wants
    to push the BPM platform to more customers, more and more customers will be using this platform, so
    I want to ask: is this data growth reasonable? Do you have a capacity planning guide for BPM 11g? And if I want to
    reduce the data growth, how can we do that?
    We have tried turning the audit log off, but it seems useless; it only saved 8% of the space.
    Thanks for your help!
    Eric
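
    As a rough sanity check on the figures above (a back-of-the-envelope projection only, assuming the tablespace grows roughly linearly with the number of task instances):

    # Numbers taken from the post above.
    mb_per_5000_tasks = 362.375                  # observed dev_soainfra growth for 5,000 tasks
    mb_per_task = mb_per_5000_tasks / 5000       # ~0.072 MB (~74 KB) per task instance

    tasks_per_day_low = 1000 * 20                # 1,000 processes/day x 20 tasks
    tasks_per_day_high = 1000 * 30               # 1,000 processes/day x 30 tasks

    print(mb_per_task * tasks_per_day_low / 1024)    # ~1.4 GB/day
    print(mb_per_task * tasks_per_day_high / 1024)   # ~2.1 GB/day

    The observed ~1 GB/day is in the same ballpark as this projection, so the growth itself looks consistent with the default audit level rather than anomalous.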

    It looks like you are writing your data to disk every so often.  For that reason, I recommend making it based on the number of samples you have instead of the time.  With that you can preallocate your arrays with constants going into the shift registers.  You then use Replace Array Subset to update your arrays.  When you write to the file, make sure you go back to overwriting the beginning of your array.  This will greatly reduce the amount of time you spend reallocating memory and will reduce your memory usage.
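
    The advice above is LabVIEW-specific (shift registers, Replace Array Subset), but the underlying pattern is general: preallocate the buffer once, overwrite it in place by index, and reset the write position after flushing to disk. A minimal sketch of the same idea in Python/NumPy (buffer size and file name are arbitrary):

    import numpy as np

    BUF_SIZE = 1000                       # preallocate once instead of growing the array
    buf = np.zeros(BUF_SIZE)              # analogue of the array held in the shift register
    write_idx = 0

    def add_sample(value):
        # Overwrite the next slot in place (analogue of Replace Array Subset).
        global write_idx
        buf[write_idx] = value
        write_idx += 1
        if write_idx == BUF_SIZE:         # buffer full: flush, then start over
            flush()

    def flush():
        # Write the buffer out, then go back to overwriting from the start.
        global write_idx
        with open('samples.bin', 'ab') as f:
            buf[:write_idx].tofile(f)
        write_idx = 0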

  • GVD_SEGSTAT table increasing very fast

    Hi Gurus,
    Our GVD_SEGSTAT table contains more than 900 million records and is now growing by 30-50 million records per day. But why? I found two notes (867162 and 1080813), but I think those notes do not contain my answer, only a workaround for the symptom.
    Can you help me understand the reason for the accelerated growth of the table?
    Thanks for your help!

    It looks like there is some tuning to be done in your Oracle instance with regard to statistics collection and online backup, which could be why snapshots of statistics are being collected in your tables...
    I am not sure tuning alone is the reason; maybe the statistics for backups are not being overwritten with the latest version but stored as new records?
    Just guessing... not sure if I am anywhere near the answer...
    The best person to answer this would be a DBA.

  • Disk Size Increasing very Fast

    I am facing a very critical issue: the disk where Exchange 2013 is installed is losing nearly 1 GB of free space daily, while on the other hand the database file is not taking up much space on the disk. Please suggest a good option to sort this out.
    BRAT

    Hi,
    Based on my knowledge, circular logging is not recommended in a normal Exchange production environment, and enabling it is not a long-term option.
    I recommend you disable it and do a full backup to solve your issue.
    For more information, here is a thread for your reference:
    enable circular logging (note: though it is about Exchange 2010, I think it also applies to Exchange 2013 for this issue)
    http://social.technet.microsoft.com/Forums/en-US/a01579af-8cdc-40d3-aef4-b5f569833553/enable-circular-logging?forum=exchange2010
    Hope it helps.
    Best regards,
    Amy
    Amy Wang
    TechNet Community Support

  • Deleting RSBATCHDATA table

    Dear Expert,
    I have deleted the messages older than 30 days using the RSBATCH transaction.
    But the size of the table has not been reduced.
    What can I do to reduce the table size after deleting the RSBATCHDATA entries?
    Regards,
    Rajesh Behera

    Hello Rajesh,
    I think very little data was created in the last 30 days. That is the reason you did not see a significant reduction in the size of table RSBATCHDATA.
    You can take a look at following link.
    RSBATCHDATA  table in BI increasing very fast
    Thanks,
    Siva Kumar

  • Very fast growing STDERR# File

    Hi experts,
    I have stderr# files on two app-servers, which are growing very fast.
    The problem is, I can't open the files via ST11 as they are too big.
    Is there a guide that explains what this file is about and how I can manage it (reset, ...)?
    Might it be a locking log?
    I have a few entries in SM21 about failed locking.
    I also can find entries about "call recv failed" and "comm error, cpic return code 020".
    Thx in advance

    Dear Christian,
    The stderr* files are used to record syslog and logon check output. When the system is up, only one of them should be in use and you can delete the others; for example, if stderr1 is being used, you can delete stderr0,
    stderr2, stderr3, and so on. Otherwise, only shutting down the application server will allow deletion. Once deleted, the files will be created
    again and will only grow if the original issue causing the growth still exists; switching between the files is internal and not controlled by size.
    Some causes of 'stderr4' growth:
    In the case of repeated input/output errors of a TemSe object (in particular in the background), large portions of trace information are written to stderr. This information is not necessary and not useful in this quantity.
    Please carefully review the following notes:
       48400: Reorganization of TemSe and Spool
      (here you delete old TemSe objects)
    RSPO0041 (or RSPO1041), RSBTCDEL: To delete old TemSe objects
    RSPO1043 and RSTS0020 for the consistency check.
    1140307 : STDERR1 or STDERR3 becomes unusually large
    Please also run a Consistency Check of DB Tables as follows:
    1. Run Transaction SM65
    2. Select Goto ... Additional tests
    3. Select "Consistency check DB Tables" and click execute.
    4. Once you get the results check to see if you have any inconsistencies
       in any of your tables.
    5. If any inconsistencies are reported, then run the "Background
       Processing Analysis" (SM65 ... Goto ... Additional Tests) again.
       This time check both the "Consistency check DB Tables" and the
       "Remove Inconsistencies" option.
    6. Run this a couple of times until all inconsistencies are removed from
       the tables.
    Make sure you run this SM65 check when the system is quiet and no other batch jobs are running as this would put a lock on the TBTCO table till it finishes.  This table may be needed by any other batch job that is running or scheduled to run at the time SM65 checks are running.
    Running these jobs daily should ensure that the stderr files do not increase at this rate in the future.
    If the system is running smoothly, these files should not grow very fast, because they mostly just record error information when an error happens.
    For more information about stderr please refer to the following note:
       12715: Collective note: problems with SCSA
           (the note contains information about what is in stderr and how it is created).
    Regards,
    Abhishek

  • New HDD Load / Unload Cycle Count increasing extremely fast !

    Hi all
    I just upgraded my Pavilion dv5 HDD. The new model is a Hitachi Travelstar 7K500 (HTS725050A9A364). However, I found my new HDD's Load/Unload Cycle Count increasing extremely fast!
    So far the number of Load/Unload Cycles is 12,127, yet the new HDD has only been powered on for 137 hours (about 1 week of my use). The data below is my new HDD's SMART data (from EVEREST 5.50):
    ID    Attribute    Threshold    Value    Worst    Data
    01    Raw Read Error Rate    62    100    100    0   
    02    Throughput Performance    40    100    100    0   
    03    Spinup Time    33    159    159    2  
    04    Start/Stop Count    0    100    100    14   
    05    Reallocated Sector Count    5    100    100    0  
    07    Seek Error Rate    67    100    100    0  
    08    Seek Time Performance    40    100    100    0  
    09    Power-On Time Count    0    100    100    137  
    0A    Spinup Retry Count    60    100    100    0   
    0C    Power Cycle Count    0    100    100    14  
    BF    Mechanical Shock    0    100    100    0   
    C0    Power-Off Retract Count    0    100    100    1 
    C1    Load/Unload Cycle Count    0    99    99    12127   
    C2    Temperature    0    152    152    19, 36  
    C4    Reallocation Event Count    0    100    100    0   
    C5    Current Pending Sector Count    0    100    100    0  
    C6    Offline Uncorrectable Sector Count    0    100    100    0  
    C7    Ultra ATA CRC Error Rate    0    200    200    0  
    DF    Load/Unload Retry Count    0    100    100    0   
    I'm pretty sure the data is correct, because I can hear the HDD's load/unload sound very frequently. The C1 value increases by about 2,000 per day. My previous Hitachi 5K320, however, had no such problem. The operating system is Windows 7 in both cases.
    I'm very worried about this. As you know, a laptop HDD's load/unload mechanism is rated for about 600,000 cycles. If my HDD keeps increasing like this, it will reach that number in a very short time. Can anyone help me?

    It's a feature of modern 2.5" hard disks: the disk parks the head when it has been inactive for a while, to avoid damage, use less power, and reduce heat, and it gives the hard disk a better-looking spec sheet.
    With proper management of the head-parking feature from the BIOS/OS, the head is meant to be parked only after a long idle period, around an hour. Because there is nothing in HP's BIOS or the OS to manage this feature, the disk parks the head about every minute for no reason, and after 1 second the head goes back on the disk: that is 1 load/unload cycle. It also affects performance, as the head has to find its place back on the platter; loading gets noticeably slower and videos stutter for the first few seconds.
    I have the same problem, but not as bad: after 130 hours I had a count of around 1,250 load/unloads. Different brands of disk have different idle timers, and I use uTorrent, which stops the hard disk from idling so much.
    There are a couple of solutions: you can find a tool to update/mod the hard disk firmware to increase the idle timer (I decided against this, as you can permanently break the hard drive and void the warranty), or contact HP and request a BIOS update with proper management of the head-parking feature, i.e. one that tells the hard disk it is idle only after an hour.
    The temporary solutions are: download HD Tune and run it all the time, which stops the hard disk from parking its head because it never lets the disk go idle; or the solution I am using at the moment, a programme called HDDScan. Every time you turn the machine on you have to run the program, go to Tasks / Features / IDE Features, set Advanced Power Management from its default value to 254, then press Set: no more unnecessary load/unloads. The downside is that the disk runs a couple of degrees hotter (mine still never goes above 40 C even under full load, thanks to an active cooling pad), and it is less well protected against shock.
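
    To put the numbers from the question into perspective (a back-of-the-envelope calculation only, assuming the parking rate stays constant):

    # Figures taken from the post above.
    cycles_so_far = 12127        # Load/Unload Cycle Count after 137 power-on hours
    power_on_hours = 137
    design_limit = 600000        # rated load/unload cycles for the drive

    cycles_per_hour = cycles_so_far / power_on_hours    # ~88 cycles per hour
    cycles_per_day = cycles_per_hour * 24               # ~2,100 per day, matching the ~2,000/day observed

    hours_to_limit = (design_limit - cycles_so_far) / cycles_per_hour
    print(hours_to_limit)        # ~6,600 further power-on hours until the rated limit is reached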

  • WWV_FLOW_DATA growing very fast

    Hi,
    We have a public application and we see wwv_flow_data growing very, very fast (up to 5 GB now).
    In a way this is a good sign ;) it means that we get a lot of hits... but we are also starting to see some contention on that table.
    It would be nice to be able to set one purge interval for public (nobody) sessions and another for authenticated sessions.
    We have some people who have to stay connected all day, so we cannot purge sessions that are younger than 10 hours.
    Is there another way to limit the number of records in wwv_flow_data than using wwv_flow_cache.purge_sessions(p_purge_sess_older_then_hrs => 24); ?
    Thanks
    Francis Mignault
    http://insum-apex.blogspot.com/
    http://www.insum.ca

    In the /f?p=4050:65 APEX report I can see the sessions and users; is there any way I could use that to delete the records?
    No, it doesn't let you select by user name.
    You can login to the workspace, though, and navigate to:
    Home>Administration>Manage Services>Manage Session State>Recent Sessions>Session Details
    Here you can remove a session one-by-one. But that's probably too tedious.
    Scott
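
    If the purge has to be automated, the only programmatic route mentioned here is the purge_sessions call from the question. A minimal sketch of invoking it from Python with the python-oracledb driver (connection details are placeholders; the schema running it needs execute rights on the APEX packages, and depending on the APEX version a workspace context may have to be set first):

    import oracledb

    PURGE_SQL = """
    begin
      wwv_flow_cache.purge_sessions(p_purge_sess_older_then_hrs => :hrs);
    end;"""

    def purge_old_sessions(hours=24):
        # Purge APEX sessions older than the given number of hours.
        conn = oracledb.connect(user='apex_admin', password='secret',
                                dsn='dbhost/orclpdb')   # placeholders
        try:
            with conn.cursor() as cur:
                cur.execute(PURGE_SQL, hrs=hours)
            conn.commit()
        finally:
            conn.close()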

  • PSAPSR3 Tablespace is only growing very fast in PROD

    Dear All,
    In our production server, the PSAPSR3 tablespace is growing very fast (note: within 5 days I have had to extend the PSAPSR3 tablespace 2 times).
    Please let me know: is the only permanent solution to keep extending the tablespace, or is there an alternative way to control the growth of a specific tablespace?
    Please check the DB02 tablespace details:
    PSAPSR3     219,640.00     10,010.81     95     YES     220,000.00     10,370.81     95     22     157,305     226,884     ONLINE     PERMANENT
    PSAPSR3700     71,120.00     3,506.75     95     YES     170,000.00     102,386.75     40     17     868     11,389     ONLINE     PERMANENT
    PSAPSR3USR     20.00     1.94     90     YES     10,000.00     9,981.94     0     1     38     108     ONLINE     PERMANENT
    PSAPTEMP     4,260.00     4,260.00     0     YES     10,000.00     10,000.00     0     1     0     0     ONLINE     TEMPORARY
    PSAPUNDO     10,000.00     8,391.44     16     NO     10,000.00     8,391.44     16     1     20     498     ONLINE     UNDO
    SYSAUX     480.00     22.88     95     YES     10,000.00     9,542.88     5     1     991     2,633     ONLINE     PERMANENT
    SYSTEM     880.00     5.44     99     YES     10,000.00     9,125.44     9     1     1,212     2,835     ONLINE     PERMANENT
    Kindly advise

    Dear MHO/Sunil/Eric,
    The PSAPSR3 tablespace still keeps on growing.
    Please check the DB02 segment details:
    SAPSR3     BALDAT          TABLE     PSAPSR3     42,622.000     268.800     853     5,455,616
    SAPSR3     SYS_LOB0000072694C00007$$          LOBSEGMENT     PSAPSR3     5,914.000     191.533     277     756,992
    SAPSR3     CDCLS          TABLE     PSAPSR3     9,091.000     38.400     327     1,163,648
    SAPSR3     SYS_LOB0000082646C00006$$          LOBSEGMENT     PSAPSR3     1,664.000     37.067     209     212,992
    SAPSR3     BALDAT~0          INDEX     PSAPSR3     5,049.000     32.000     266     646,272
    SAPSR3     EDI40          TABLE     PSAPSR3     3,155.000     23.467     233     403,840
    SAPSR3     CDCLS~0          INDEX     PSAPSR3     1,965.000     19.200     214     251,520
    SAPSR3     BDCP2~001          INDEX     PSAPSR3     1,543.000     18.400     208     197,504
    SAPSR3     BDCPS~1          INDEX     PSAPSR3     4,039.000     17.067     247     516,992
    SAPSR3     APQD          TABLE     PSAPSR3     1,671.000     17.067     210     213,888
    SAPSR3     CDHDR~0          INDEX     PSAPSR3     2,183.000     12.800     218     279,424
    SAPSR3     CDHDR          TABLE     PSAPSR3     2,305.000     12.800     220     295,040
    SAPSR3     BDCP2~0          INDEX     PSAPSR3     1,000.000     12.533     196     128,000
    SAPSR3     ZBIPRICING~0          INDEX     PSAPSR3     320.000     10.600     111     40,960
    SAPSR3     WRPL          TABLE     PSAPSR3     288.000     8.700     107     36,864
    SAPSR3     FAGL_SPLINFO          TABLE     PSAPSR3     1,016.000     8.000     198     130,048
    SAPSR3     FAGL_SPLINFO_VAL~0          INDEX     PSAPSR3     736.000     8.000     163     94,208
    SAPSR3     ZBIPRICING          TABLE     PSAPSR3     208.000     6.931     97     26,624
    SAPSR3     MARC~Y          INDEX     PSAPSR3     176.000     5.533     93     22,528
    SYS     WRH$_ACTIVE_SESSION_HISTORY     WRH$_ACTIVE_2349179954_18942     TABLE PARTITION     SYSAUX     6.000     5.375     21     768
    SAPSR3     MARC~VBM          INDEX     PSAPSR3     152.000     4.867     90     19,456
    SAPSR3     MARC~D          INDEX     PSAPSR3     136.000     4.367     88     17,408
    SAPSR3     FAGLFLEXA          TABLE     PSAPSR3     2,052.000     4.267     216     262,656
    SAPSR3     RFBLG          TABLE     PSAPSR3     3,200.000     4.267     233     409,600
    SAPSR3     BDCPS          TABLE     PSAPSR3     1,280.000     4.267     203     163,840
    SAPSR3     BDCP~POS          INDEX     PSAPSR3     3,392.000     4.267     236     434,176
    SAPSR3     BALHDR          TABLE     PSAPSR3     864.000     4.000     179     110,592
    SAPSR3     FAGL_SPLINFO~0          INDEX     PSAPSR3     361.000     3.767     117     46,208
    SAPSR3     ACCTIT          TABLE     PSAPSR3     289.000     3.733     108     36,992
    SAPSR3     WRPT~0          INDEX     PSAPSR3     112.000     3.731     85     14,336
    SAPSR3     FAGL_SPLINFO_VAL          TABLE     PSAPSR3     448.000     3.467     127     57,344
    SAPSR3     COEJ          TABLE     PSAPSR3     1,089.000     3.200     201     139,392
    SAPSR3     ZBISALEDATA3          TABLE     PSAPSR3     176.000     3.200     93     22,528
    SAPSR3     COEP~1          INDEX     PSAPSR3     927.000     3.167     187     118,656
    SAPSR3     GLPCP          TABLE     PSAPSR3     891.000     2.933     183     114,048
    SAPSR3     ZBISALEDATA          TABLE     PSAPSR3     376.000     2.933     118     48,128
    SAPSR3     WBBP          TABLE     PSAPSR3     344.000     2.933     114     44,032
    SYS     WRH$_ACTIVE_SESSION_HISTORY     WRH$_ACTIVE_2349179954_18918     TABLE PARTITION     SYSAUX     6.000     2.594     21     768
    SAPSR3     FAGL_SPLINFO~1          INDEX     PSAPSR3     280.000     2.400     106     35,840
    SAPSR3     SE16N_CD_DATA          TABLE     PSAPSR3     72.000     2.333     80     9,216
    SAPSR3     KONH          TABLE     PSAPSR3     1,373.000     2.133     207     175,744
    SAPSR3     GLPCA          TABLE     PSAPSR3     2,437.000     2.133     222     311,936
    SAPSR3     BDCP~0          INDEX     PSAPSR3     1,863.000     2.133     213     238,464
    SAPSR3     SYS_LOB0000161775C00013$$          LOBSEGMENT     PSAPSR3700     5,210.000     2.133     266     666,880
    SAPSR3     BDCPS~0          INDEX     PSAPSR3     2,496.000     2.133     222     319,488
    SAPSR3     D010TAB          TABLE     PSAPSR3700     2,176.000     2.133     217     278,528
    SAPSR3     COEP          TABLE     PSAPSR3     2,117.000     2.133     217     270,976
    SAPSR3     FAGLFLEXA~0          INDEX     PSAPSR3     808.000     2.133     172     103,424
    SAPSR3     BSIS          TABLE     PSAPSR3     1,734.000     2.133     211     221,952
    SAPSR3     BSAS          TABLE     PSAPSR3     1,650.000     2.133     210     211,200
    SAPSR3     GLPCA~3          INDEX     PSAPSR3     382.000     1.867     119     48,896
    SAPSR3     BKPF          TABLE     PSAPSR3     1,012.000     1.867     198     129,536
    SAPSR3     FAGLFLEXA~3          INDEX     PSAPSR3     744.000     1.867     164     95,232
    SAPSR3     FAGLFLEXA~2          INDEX     PSAPSR3     661.000     1.867     154     84,608
    SAPSR3     WRPL~001          INDEX     PSAPSR3     112.000     1.867     85     14,336
    SAPSR3     WRPL~0          INDEX     PSAPSR3     112.000     1.667     85     14,336
    SAPSR3     PCL2          TABLE     PSAPSR3     1,000.000     1.600     196     128,000
    SAPSR3     GLPCA~2          INDEX     PSAPSR3     345.000     1.600     115     44,160
    SAPSR3     FAGL_SPLINFO~3          INDEX     PSAPSR3     136.000     1.600     88     17,408
    SAPSR3     MARC~WRK          INDEX     PSAPSR3     160.000     1.600     91     20,480
    SAPSR3     MSEG          TABLE     PSAPSR3     136.000     1.600     88     17,408
    SAPSR3     ZBISALEDATA~0          INDEX     PSAPSR3     208.000     1.600     97     26,624
    SAPSR3     ZBISALEDATA3~0          INDEX     PSAPSR3     195.000     1.500     96     24,960
    SYS     WRH$_ACTIVE_SESSION_HISTORY     WRH$_ACTIVE_2349179954_18894     TABLE PARTITION
    Kindly suggest

  • CAS Content lib growing very fast!! HELP.

    Hello guys!!
    The "SCCMContentLib" at CAS in my SCCM 2012 R2 was growing very fast! In 15 minutes increased 3GB!!
    Anyone help me?
    Thanks!!
    Atenciosamente Julio Araujo

    Is SP0 your CAS? It looks like the package is created there. You can read more about Content Library here:
    http://technet.microsoft.com/en-us/library/gg682083.aspx#BKMK_ContentLibrary and here
    http://technet.microsoft.com/en-us/library/gg682083.aspx I would also like to suggest
    https://social.technet.microsoft.com/Forums/en-US/de323e04-7bff-4d28-b76e-b4ab4c52cf4b/sccmcontentlib-on-cas?forum=configmanagerdeployment
    Tim Nilimaa-Svärd | Blog: http://infoworks.tv | Twitter: @timnilimaa

  • Upload to USB storage appears very fast, and that's why I can't unmount!

    Hi everyone,
    If I copy some files to a USB storage device, it looks like the copy is very, very fast, like 100 MB in 1 second, but in the background it is still writing (I can't unmount). The problem is that every program shows me the data as copied within 1-2 seconds. I then have to wait every time, but I don't know for how long, and that's really annoying.
    How can I fix it?
    /etc/fstab
    /dev/sdb /media/sdb vfat rw,users,umask=000,uid=fatih 0 0
    dmesg
    usb 1-1: USB disconnect, address 4
    usb 1-1: new full speed USB device using uhci_hcd and address 5
    usb 1-1: configuration #1 chosen from 1 choice
    scsi5 : SCSI emulation for USB Mass Storage devices
    usb-storage: device found at 5
    usb-storage: waiting for device to settle before scanning
    scsi 5:0:0:0: Direct-Access Nokia N95 1.0 PQ: 0 ANSI: 0
    sd 5:0:0:0: [sdb] 1000215 512-byte hardware sectors (512 MB)
    sd 5:0:0:0: [sdb] Write Protect is off
    sd 5:0:0:0: [sdb] Mode Sense: 03 00 00 00
    sd 5:0:0:0: [sdb] Assuming drive cache: write through
    sd 5:0:0:0: [sdb] 1000215 512-byte hardware sectors (512 MB)
    sd 5:0:0:0: [sdb] Write Protect is off
    sd 5:0:0:0: [sdb] Mode Sense: 03 00 00 00
    sd 5:0:0:0: [sdb] Assuming drive cache: write through
    sdb: unknown partition table
    sd 5:0:0:0: [sdb] Attached SCSI removable disk
    sd 5:0:0:0: Attached scsi generic sg2 type 0
    usb-storage: device scan complete
    usb 1-1: USB disconnect, address 5
    NOTE: I've tried it on Ubuntu, and there wasn't any problem.

    But I don't know how long I have to wait.
    Well, that's kind of the thing... it could be forever. The kernel/filesystem may decide to hold off the actual writes until you sync or add more files, so it can make all the pieces of the puzzle (partition) fit without fragmentation. See?
    If you decide to mount with the sync option, you get a steady stream transfer to your device, and you will (at least in theory) see how long the task really takes. (Am I mistaken?)
    If you decide to go with async (the default behaviour), this is not possible, but the benefit is less fragmentation.
    I guess it would be possible to have the system tell you how long the actual "sync" manoeuvre will take when you unmount, but as far as I know this is not implemented yet. Maybe you could file a feature request somewhere? (I don't know whether the HAL scripts or the notification daemon is responsible for the actual "information" you receive.)
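
    For reference, trying the sync behaviour described above would just mean adding the sync option to the fstab line from the question (a sketch only; note that synchronous writes to a FAT-formatted flash device are slower and cause noticeably more write activity):

    /dev/sdb /media/sdb vfat rw,users,sync,umask=000,uid=fatih 0 0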

  • Database data file growing very fast

    Hi
    I have a database that runs on SQL Server 2000.
    A few months back, the database was moved to a new server because the old server crashed.
    There was no issue on the old server, which had been in use for more than 10 years.
    I have noticed that the data file has been growing very fast since the database was moved to the new server.
    When I run "sp_spaceused", a lot of space is unused. Below is the result:
    database size = 50950.81 MB
    unallocated space = 14.44 MB
    reserved = 52048960 KB
    data = 9502168 KB
    index size = 85408 KB
    unused = 42461384 KB
    When I run "sp_spaceused" for just one big table, the result is:
    reserved = 19115904 KB
    data = 4241992 KB
    index size = 104 KB
    unused = 14873808 KB
    I have shrunk the database, but the size didn't reduce.
    May I know how to reduce the size? Thanks.
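
    For what it's worth, the sp_spaceused figures above reconcile exactly, and converted to GB they show that almost all of the reserved space is sitting unused inside the data file (a quick check, using 1 GB = 1,048,576 KB):

    reserved_kb = 52048960
    data_kb = 9502168
    index_kb = 85408
    unused_kb = 42461384

    assert data_kb + index_kb + unused_kb == reserved_kb   # the figures add up exactly

    KB_PER_GB = 1024 * 1024
    print(reserved_kb / KB_PER_GB)   # ~49.6 GB reserved
    print(data_kb / KB_PER_GB)       # ~9.1 GB of actual data
    print(unused_kb / KB_PER_GB)     # ~40.5 GB reserved but currently unused

    That pattern of reserved-but-unused space fits the index-maintenance explanation in the reply below.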

    Hello Thu,
    can you check whether you have active jobs in Microsoft SQL Server Agent which may...
    rebuild indexes?
    run maintenance jobs of your application?
    I'm quite confident that index maintenance is causing the "growth".
    Shrinking the database is...
    useless and
    nonsense
    if you have index maintenance tasks. Shrinking the database means moving data pages from the very end of the database to the first free part of the database file(s). This causes index fragmentation.
    If the nightly index maintenance job then rebuilds the indexes, it uses NEW space in the database to allocate the data pages!
    Read the blog post from Paul Randal about it here:
    http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/
    MCM - SQL Server 2008
    MCSE - SQL Server 2012
    db Berater GmbH
    SQL Server Blog (german only)

  • My iPhone 5s home button does not work and the battery is draining very fast

    I bought a new iPhone about 1 month ago at a True shop in Thailand.
    1. After about 15 days of use, the battery started draining very fast (16/5/2014).
    I went to the True shop, but the shop told me it was not a problem.
    2. Bluetooth does not pair with my car.
    The first time I could connect via Bluetooth, but after the True shop restored my device I can no longer connect.
    3. The home button stopped working (25/5/2014).
    I went to the True shop again.
    - They told me the shop would not accept the phone because it has a scratch, and that if I want an exchange I have to pay 9,800 baht (about 326 US dollars).
    But from the information I found, the scratch is only about 4 mm, right?
    *1 min 1 per*
    I hope Apple helps its customers. I buy a lot of Apple products because I trust your brand, but after this case I am not sure.
    Please reply to this mail as soon as possible; I am waiting for your answer.

    12 hours and I still don't have an answer... and right now my phone cannot be used.

  • iPod touch 4th generation battery draining very fast after 1 month

    Hello. I purchased my iPod touch (4th generation) 1 month ago. Initially I found that its battery life was superb: it lasted 5-6 days on standby with 8-9 hours of usage time. But for the past 2 days its battery has been draining very fast and I have to charge it twice a day. I am using iOS 6.0.1. Please suggest what I should do; it's just 1 month old and I spent a lot of money to buy it. I don't want it to be like this. Please reply.

    Try:
    - Reset the iOS device. Nothing will be lost
    Reset iOS device: Hold down the On/Off button and the Home button at the same time for at
    least ten seconds, until the Apple logo appears.
    - Then see if placing the iPod in airplane mode when in standby helps.
    - Reset all settings
    Go to Settings > General > Reset and tap Reset All Settings.
    All your preferences and settings are reset. Information (such as contacts and calendars) and media (such as songs and videos) aren’t affected.
    - Restore from backup. See:
    iOS: How to back up
    - Restore to factory settings/new iOS device.
    -  Make an appointment at the Genius Bar of an Apple store.
    Apple Retail Store - Genius Bar

  • After upgrade to IOS 6 my iphone 4S is heating up a lot and the battery is getting depleted very fast (within 2-3 hrs).

    I upgraded my iPhone 4S to iOS 6.0 yesterday. Afterwards I noticed that the phone gets hot quite often, even when it is idle. Additionally, the battery gets depleted very fast (100% to 20% within 3 hours, during which I was reading a book in the Mobibook app for about 30 minutes). Last night I left the phone connected to the charger, as I normally do, and found the phone to be extremely hot in the morning. Any help will be appreciated. In case there is no remedy, I would like to roll back to 5.1.1.

    If your iPhone 4S is extremely hot, do the following:
    Turn off your iPhone 4S and wait for 1 - 1 1/2 hours.
    Turn on your iPhone 4S.
    If the problem occurs again, turn off your iPhone 4S and wait for 2 - 3 hours.
    Turn on your iPhone 4S.
    If your iPhone 4S battery is getting low, charge your phone.
    Repeat from step 1.
