Deduper

Can Deduper be used on a Mac? Can it also delete the duplicates in your iTunes library without affecting the playlists? Thanks.
I have duplicates of almost every song, and sometimes five copies of a song... eek, that takes up too much room on my 128 GB SSD drive.

Get it here.

Similar Messages

  • Deduper and Windows 8

    I'm having trouble using the deduper on Windows 8. I have the latest iTunes. I keep getting an error message:
    Line: 691
    Char: 691
    Error: Invalid procedure call or argument
    Code: 800A0005
    Source: Microsoft VBScript runtime error

    Hi. I presume you're referring to my DeDuper script. Checking the live copy of the script, the most likely cause of the error would be that the object it is supposed to be working with no longer exists. This can happen if you run the script on a regular playlist that has more than one instance of the same object. I'm working on a new version that will be more robust, but until then it would be best to run the script on the main Songs view or a smart playlist, as these will only contain one reference to each unique iTunes object.
    If you think something else is happening please let me know.
    tt2

  • How do I run a DeDuper program to clean all of the duplicate files off of my library?

    I see the Deduper program and understand what it does, but how the **** do I run this?
    It's to the point where I cannot add music to anything because I get 10 of the same files on EVERYTHING. It makes iTunes not worth having. I am losing hair right now.

    Ask the app developer. That's not an Apple product.

  • Trying to Unoptimize a Dedup Volume - Progress stays at 0%

    For various reasons, I'm trying to rehydrate my dedup volumes on a recently deployed File Server 2012 R2. There are five volumes with dedup turned on, with savings ranging from 28% to 49% and sizes ranging from 200 GB to 2 TB.
    I'm running the PowerShell command:
    Start-DedupJob -Volume "I:" -Type Unoptimization -Priority High
    However, after a day, the result of "Get-DedupJob" is that the unoptimization job is still running, but it shows 0% progress. Shouldn't there be some indication that the job is running? I know the job is going to take a while, but even the small volumes aren't moving.
    I also ran "Get-DedupStatus" and compared the before and after "OptimizedFiles" figures, and they are unchanged as well.
    Any ideas?
    (Sidenote: prior to running the command, I put the volume folders in the "excluded" folder list so they wouldn't reoptimize after the job was done. Is this right?)
    There's no error in the event logs, and the dedup log files show that the job started and requested memory space, but nothing after that.

    I'm seeing this same behavior when running an unoptimization job - 0% progress reported, even though Get-DedupStatus shows a decreasing number of OptimizedFiles and a decrease in SavedSpace. I thought the job was "broken" when I first saw it, so I stopped the job and ran it again. It seems like Get-DedupStatus is a half-decent method of tracking progress, but this job seems to take a VERY long time to complete.
    Just posting this to stay informed of any future updates on the issue, if any... not trying to piggy-back on the post requesting help.
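    For reference, a minimal PowerShell monitoring sketch along the lines discussed above, assuming the volume being rehydrated is I: (the drive letter is illustrative):
    # Check the running unoptimization job and the progress it reports
    Get-DedupJob -Volume "I:"
    # Track rehydration indirectly: OptimizedFilesCount and SavedSpace should shrink over time
    Get-DedupStatus -Volume "I:" | Format-List OptimizedFilesCount, SavedSpace, SavingsRate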

  • More Major Issues with ZFS + Dedup

    I'm having more problems - this time, very, very serious ones, with ZFS and deduplication. Deduplication is basically making my iSCSI targets completely inaccessible to the clients trying to access them over COMSTAR. I have two commands right now that are completely hung:
    1) zfs destroy pool/volume
    2) zfs set dedup=off pool
    The first command I started four hours ago, and it has barely removed 10G of the 50G that were allocated to that volume. It also seems to periodically cause the ZFS system to stop responding to any other I/O requests, which in turn causes major issues on my iSCSI clients. I cannot kill or pause the destroy command, and I've tried renicing it, to no avail. If anyone has any hints or suggestions on what I can do to overcome this issue, I'd very much appreciate that. I'm open to suggestions that will kill the destroy command, or will at least reprioritize it such that other I/O requests have precedence over this destroy command.
    Thanks,
    Nick

    To add some more detail, I've been reviewing iostat and zpool iostat for a couple of hours, and am seeing some very, very strange behavior. There seem to be three distinct patterns going on here.
    The first is extremely heavy writes. Using zpool iostat, I see write bandwidth in the 15 MB/s range sustained for a few minutes. I'm guessing this is when ZFS is allowing normal access to volumes and when it is actually removing some of the data for the volume I tried to destroy. This only lasts for two to three minutes at a time before progressing to the next pattern.
    The second pattern is characterized by heavy, heavy read access - several thousand read operations per second and several MB/s of bandwidth. This also lasts for five or ten minutes before proceeding to the third pattern. During this time there is very little, if any, write activity.
    The third and final pattern is characterized by absolutely no write activity (0s in both the write ops/sec and write bandwidth columns) and very, very small read activity. By small read activity, I mean 100-200 read ops per second and 100-200K of read bandwidth per second. This lasts for 30 to 40 minutes, and then the pattern proceeds back to the first one.
    I have no idea what to make of this, and I'm out of my league in terms of ZFS tools to figure out what's going on. This is extremely frustrating because all of my iSCSI clients are essentially dead right now - this destroy command has completely taken over my ZFS storage, and it seems like all I can do is sit and wait for it to finish, which, at this rate, will be another 12 hours.
    Also, during this time, if I look at the plain iostat command, I see that the read ops for the physical disk and the actv are within normal ranges, as are asvc_t and %w. %b, however, is pegged at 99-100%.
    Edited by: Nick on Jan 4, 2011 10:57 AM

  • The final dedup ratio doesn't match the target dedupratio in the vdbench configuration file

    Hi,
    I would like to fill up the volumes with a predefined dedup ratio using the vdbench dedupratio and dedupunit values introduced in Vdbench 5.03.
    I'm using two CentOS servers running vdbench50402 (I also tried vdbench50403rc1). Below you can see the sample configuration file.
    With all of these dedup values I consistently get a 1.1 dedup ratio on the storage array. What are the possible reasons for the incorrect final dedup ratio?
    Thanks,
    Alex I.
    #********Vdbench configuration file*************************************
    dedupratio=5
    dedupunit=8k
    hd=default,vdbench=/vdbench50402,user=root,shell=vdbench
    hd=hostwg1,jvms=20,system=host-wg-1
    hd=hostwg2,jvms=20,system=host-wg-2
    #Define the test device size as "size=xxx"
    sd=default,size=2136G,openflags=o_direct
    sd=sd1hostwg1,lun=/dev/mapper/123456789010007a,host=hostwg1
    sd=sd2hostwg1,lun=/dev/mapper/123456789010007b,host=hostwg1
    sd=sd3hostwg1,lun=/dev/mapper/123456789010007c,host=hostwg1
    sd=sd4hostwg1,lun=/dev/mapper/123456789010007d,host=hostwg1
    sd=sd5hostwg1,lun=/dev/mapper/123456789010007e,host=hostwg1
    sd=sd6hostwg1,lun=/dev/mapper/123456789010007f,host=hostwg1
    sd=sd7hostwg1,lun=/dev/mapper/1234567890100080,host=hostwg1
    sd=sd8hostwg1,lun=/dev/mapper/1234567890100081,host=hostwg1
    sd=sd1hostwg2,lun=/dev/mapper/1234567890100082,host=hostwg2
    sd=sd2hostwg2,lun=/dev/mapper/1234567890100083,host=hostwg2
    sd=sd3hostwg2,lun=/dev/mapper/1234567890100084,host=hostwg2
    sd=sd4hostwg2,lun=/dev/mapper/1234567890100085,host=hostwg2
    sd=sd5hostwg2,lun=/dev/mapper/1234567890100086,host=hostwg2
    sd=sd6hostwg2,lun=/dev/mapper/1234567890100087,host=hostwg2
    sd=sd7hostwg2,lun=/dev/mapper/1234567890100088,host=hostwg2
    sd=sd8hostwg2,lun=/dev/mapper/1234567890100089,host=hostwg2
    wd=wd_fill,sd=sd*,xfersize=256k,rdpct=0
    rd=default
    rd=fill_pass1,wd=wd_fill,iorate=max,interval=1,elapsed=172800
    #********Vdbench configuration file*************************************

    'dedupunit=' is the key here.
    The Vdbench dedup logic is based around "storage recognizes duplicate data blocks in chunks of 8k".
    If your storage has a different 'dedup unit', Vdbench won't create the proper data patterns.
    Another issue may be the FILLING and reporting of your LUNs: if you have a 10 TB LUN but Vdbench only writes 1 TB, then your dedup will only include 1 TB worth of Vdbench-written data patterns.
    Note: may I suggest adding 'seekpct=eof' to the above parameter file? Vdbench will then stop after the last block on all SDs has been written, instead of running for your current 48 hours, which could result in rewriting the same blocks over and over, because with seekpct=0 Vdbench just starts at the beginning again when it reaches the end.
    Of course, if 48 hours is not enough elapsed time, Vdbench will terminate BEFORE the last block has been written.
    Hope this helps.
    Henk.

  • Manual replica creation of DeDup Volumes

    Hi,
    We are in the process of updating a lot of our file servers to Windows Server 2012.
    DeDup will be enabled on the volumes.
    We are going to back up these volumes via DPM 2012 SP1.
    Now my question: is it possible to create a manual replica (for example, with the help of external USB drives) for dedup-enabled volumes, or does the replica creation process have to be triggered from within the DPM console, so that all data is transferred over the network?
    Thanks in advance
    regards
    /bkpfast

    Hi,
    Unfortunately, you cannot create a manual replica for a deduped volume in its deduped state. However, you can do the following, though it may not provide much benefit:
    1) On the protected server that has the dedup volume you want to protect, create a dummy folder in the root of the volume.
    2) On the DPM server, protect the dedup volume, but uncheck the dummy folder. Choose to make a manual replica.
    3) Copy the contents of the protected volume to the replica volume using any method you choose (a rough sketch follows below).
    4) Run the mandatory consistency check to make sure the data is equivalent.
    5) On the DPM server, modify the PG and now include the dummy folder - basically uncheck the drive letter and recheck it so DPM will protect the whole volume.
    6) On the protected server, delete the dummy folder.
    7) Re-run a new consistency check, and DPM will now protect the volume in a deduped state. This will result in only transferring deduped blocks and leaving non-dedup blocks intact.
    Again, that may not buy you much and may be a waste of time - but give it a whirl.
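    As a rough illustration of step 3 only, here is a minimal PowerShell sketch that calls robocopy; the source drive letter and replica path are hypothetical placeholders, since the real DPM replica volume path depends on your DPM storage layout:
    # Copy the protected volume's contents into the manually created replica volume,
    # preserving subfolders, security descriptors and timestamps
    robocopy "E:\" "R:\Replica\E-Volume" /E /COPYALL /R:1 /W:1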
    Regards, Mike J. [MSFT]

  • Secondary DPM (2012 R2) Issues: synchronising too much data from Primary DPM, persistent consistency checks, and no dedup support!

    I have a primary DPM server that is protecting various file servers and an Exchange DAG. Everything is working as it should on this server, but I am experiencing constant issues on the secondary DPM server protecting these same data sources.
    On the primary, two protection groups in particular are set up to protect two different volumes on the same file server. Volume D:\ is 46 TB in size, with a deduplicated file size of 22 TB (actual 39 TB), whilst Volume E:\ is 25 TB in size, with a deduplicated file size of 2.5 TB (actual 5.5 TB).
    Issue 1:
    As expected, Volume D:\ was initially replicated to the secondary DPM server at its undeduplicated size (39 TB), as was Volume E:\, with initial replication to the secondary of 5.5 TB. So when is Microsoft going to support dedup on a secondary DPM? It seems daft to support dedup on the primary DPM, which is always more likely to be close to the original data sources on the LAN, and not on the secondary DPM, which is most likely to be placed offsite on a WAN!
    Issue 2:
    I have a big issue with the subsequent synchronizations on the secondary server for E:, which seem to transfer almost the full 5.5 TB every day (I have it set to sync every 24 hrs), although the data is fairly static (i.e. unchanging) on that volume. On one occasion a sync continued for over 48 hours for this volume and had transferred over 20 GB (according to the DPM console) until I manually cancelled the job - how is that possible on a volume with only 5.5 TB (un-deduplicated)?? What is going on here - has anyone any ideas?
    Issue 3:
    Another file server I am protecting on both the primary and secondary DPM servers always fails over to a consistency check on the secondary server, usually because it cannot access a particular file, which results in an inconsistent replica. However, the sync (and subsequent restore point) on the primary DPM server from the original data source always completes without any issues. Again, has anyone any clues?
    I do get the impression that the whole secondary DPM thing is not quite robust enough. I can only assume that, as the primary seems to protect the original data sources OK, the issue is with the secondary reading the information on the primary DPM.

    I tried changing the time of the synchronization, but that didn't help.
    Meanwhile, I was working on another unrelated case with Microsoft and so I didn't want to have a second case open at the same time. So I waited for some weeks with no change on this problem. Then, about a day or two before I was finally going to call Microsoft
    to open a case (months after the problem had started), the problem suddenly resolved itself, with no input from me! So I don't really know if it was time that fixed it eventually or what. Sorry I can't be of more help.
    Rob

  • 2012 R2 Indexing vs. DeDup?

    I found some threads on here about Windows Search not working on dedup volumes. Most of the threads are from the pre-R2 timeframe, so I am checking to see if there is any updated info on whether this was fixed in R2. It seems like a critical feature if you are using a volume for redirected libraries.
    Brian Hoyt

    Hi,
    I got a reply from the Deduplication user discussions team. The Windows Search indexer skips files that carry reparse points, and deduplicated files carry reparse points. So, as of now, using the Windows Search service to index deduplicated files is still not supported, including on Windows Server 2012 R2.
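    A minimal sketch of how to confirm this on a single file (the path is hypothetical); an optimized file reports the ReparsePoint attribute, which is exactly what the indexer skips:
    # Deduplicated (optimized) files carry a reparse point, so Windows Search skips them
    Get-Item 'D:\Shares\report.docx' | Select-Object Name, Attributes
    # An optimized file typically shows: Archive, SparseFile, ReparsePoint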
    Regards,
    Mandy

  • Server 2012 R2 - Dedup chunk store bigger than original data

    Hi there,
    I have a fresh install of Windows Server 2012 R2 as a file server with deduplication enabled on several volumes. One volume shows strange behavior after a couple of weeks in the production environment: initially I had a dedup savings rate of ~10% (the volume contains mainly images), but after a few weeks both the UI and PowerShell report savings of 0 bytes.
    What is really bad about this is that the original size of the un-deduplicated files is about 880 GB and my volume has filled up to nearly 1 TB (no, the data has not increased that much over time, only a few gigs). Watching the volume using TreeSize (JamSoft), I found that the dedup folder has grown to 990 GB!
    I tried to run optimization and garbage collection with no luck. The main folder is DFS-replicated and (still) doing the initial sync - this may change files, but dedup is configured to dedup files after 0 days.
    Any ideas?
    Regards,
    Marc

    Hi Marc,
    From the screenshot it seems that almost all files have been put into the Dedup folder.
    Deduplication supports DFSR, so I cannot confirm that it is the cause. However, as the initial replication is still running and the dedup schedule is set to 0 days, I think we can eliminate one of them first to see if the issue is fixed.
    A. Please try disabling deduplication until the initial replication is done. Disabling deduplication could help you get out of the current situation, and if the issue is DFSR-related, files will not be edited as frequently once the initial replication is finished, which should help.
    Note: to speed up the initial replication, you can pre-stage the data by copying the files to the target server so that they do not need to be replicated through DFSR. And if the files will not change during replication, you can use the new Windows Server 2012 R2 function that copies only metadata to the target server. See:
    DFS Replication Initial Sync in Windows Server 2012 R2: Attack of the Clones
    http://blogs.technet.com/b/filecab/archive/2013/08/21/dfs-replication-initial-sync-in-windows-server-2012-r2-attack-of-the-clones.aspx
    B. You can also set the deduplication schedule back to 5 days (the default setting). You may need to run an unoptimization first to un-dedup all files (similar to disabling dedup) to get out of the current situation. With the schedule set to 5 days, files will not be deduplicated immediately, which should also help with this issue. A minimal PowerShell sketch of both options follows below.
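    For illustration only, a minimal sketch of options A and B in PowerShell, assuming the affected volume is E: (the drive letter and the 5-day value are placeholders):
    # Option A: stop further deduplication on the volume until the initial DFSR sync completes
    Disable-DedupVolume -Volume "E:"
    # Rehydrate files that are already optimized, then reclaim space from the chunk store
    Start-DedupJob -Volume "E:" -Type Unoptimization
    Start-DedupJob -Volume "E:" -Type GarbageCollection
    # Option B: later, re-enable dedup with the default 5-day minimum file age
    Enable-DedupVolume -Volume "E:"
    Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 5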

  • Dedup my table

    I have a table that shows playtimes for videos. Our app has a bug that allowed duplicate rows. Technically, on the DB they aren't duplicates because the IDs are unique, but they have the same asset_num and start_time. I need to weed out all the rows with duplicate asset_num and start_time values, BUT I need to keep one of the duplicates.
    I run this to find which combinations have more than one row:
    select asset_id, start_time, count(*) from current_schedule group by asset_id, start_time order by count(*) desc
    Each row also has a unique id that I could use in the delete statement, but I want to leave ONE of the rows after my delete.
    Has anyone ever deduped a table like this? I think I need a delete statement based on rowid, but I haven't been able to get it to work yet. Any help would be greatly appreciated.

    This example might be of help.
    SQL> select * from employees;
    YEAR EM NAME       PO
    2001 02 Scott      91
    2001 02 Scott      01
    2001 02 Scott      07
    2001 03 Tom        81
    2001 03 Tom        84
    2001 03 Tom        87
    6 rows selected.
    SQL> select year, empcode, name, position,
      2         row_number() over (partition by year, empcode, name
      3                            order by year, empcode, name, position) as rn
      4    from employees;
    YEAR EM NAME       PO         RN
    2001 02 Scott      01          1
    2001 02 Scott      07          2
    2001 02 Scott      91          3
    2001 03 Tom        81          1
    2001 03 Tom        84          2
    2001 03 Tom        87          3
    6 rows selected.
    SQL> Select year, empcode, name, position
      2    From (Select year, empcode, name, position,
      3                 row_number() over (partition by year, empcode, name
      4                                    order by year, empcode, name, position) as rn
      5            From employees) emp
      6   Where rn = 1;
    YEAR EM NAME       PO
    2001 02 Scott      01
    2001 03 Tom        81
    SQL> Delete From employees
      2   Where rowid in (Select emp.rid
      3                     From (Select year, empcode, name, position,
      4                                  rowid as rid,
      5                                  row_number() over (partition by year, empcode, name
      6                                            order by year, empcode, name, position) as rn
      7                             From employees) emp
      8                    Where emp.rn > 1);
    4 rows deleted.
    SQL> select * from employees;
    YEAR EM NAME       PO
    2001 02 Scott      01
    2001 03 Tom        81
    SQL>

  • ZFS Dedup question

    Hello All,
    I have been playing around with ZFS dedup in OpenSolaris.
    I would like to know how ZFS stores the dedup table. I know it is usually in memory, but it must leave a copy on disk. How is this table protected? Are there multiple copies, like the uberblock?
    Thanks
    --Pete

  • DeDupe and logical Questions

    Can someone explain this to me: I ran dedup and it showed 100% completed, and at that time it had saved 21 GB; after three days it had saved 1 TB on the same data. How is that possible?

    Hi, 
    I got the reply below from the Deduplication user discussions team.
    There are three possible explanations for this.
    1. Long running optimization job – the statistics for the job are saved periodically as the job runs, so you can watch these values increase during optimization execution
    2. Open files – Data Deduplication in WS2012 skips open files during optimization. When the files are later closed and the optimization job runs again, it will optimize those files and the SavingsRate will go up.
    3. Normal file churn – after the first deduplication job ran, some non-deduplicated data on the volume was deleted.
    More information on #3…
    There are two measures of deduplication rate, SavingsRate and OptimizedFilesSavingsRate (a quick way to inspect both is sketched below).
    - SavingsRate is based on the total amount of data on the volume. You can see this value decrease/increase as you add/remove data on the volume (e.g. copy a bunch of files to the volume after the optimization job has completed).
    - OptimizedFilesSavingsRate is based on the total amount of deduplicated data on the volume. You would see this value stay the same as you add/remove data on the volume (until another optimization or GC job runs).
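    A minimal PowerShell sketch for watching both rates on a volume (the drive letter is illustrative; run it before and after adding or removing data to see the difference):
    # Both savings rates are exposed by the deduplication status cmdlet
    Get-DedupStatus -Volume "D:" |
        Format-List Volume, SavingsRate, OptimizedFilesSavingsRate, SavedSpace, OptimizedFilesCount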
    Regards, 
    Mandy

  • RMAN Backup - Dedup ratio

    Hello
    We run an RMAN backup every night. This backup is then saved to the NetBackup system, but we have the problem that our deduplication ratio is very bad. Are there any RMAN settings that would make the dedup ratio better?
    Thanks...
    Best regards...
    street

    try this:
    CONFIGURE CONTROLFILE AUTOBACKUP OFF;
    CONFIGURE BACKUP OPTIMIZATION OFF;
    CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oraback/db/backup/temp/wdkp_SNAP.ctl';
    ALLOCATE CHANNEL DISK_1 DEVICE TYPE DISK FORMAT '/oraback/db/backup/%T_%s_%t_%p.dbf';
    ALLOCATE CHANNEL DISK_2 DEVICE TYPE DISK FORMAT '/oraback/db/backup/%T_%s_%t_%p.dbf';
    ALLOCATE CHANNEL DISK_3 DEVICE TYPE DISK FORMAT '/oraback/db/backup/%T_%s_%t_%p.dbf';
    ALLOCATE CHANNEL DISK_4 DEVICE TYPE DISK FORMAT '/oraback/db/backup/%T_%s_%t_%p.dbf';
    BACKUP INCREMENTAL LEVEL 0 DATABASE;
    SQL 'BEGIN DBMS_LOGMNR_D.BUILD(OPTIONS=^>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS); END;';
    SQL 'CREATE RESTORE POINT WDKP_BACKUP';
    SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
    BACKUP ARCHIVELOG ALL;
    BACKUP CURRENT CONTROLFILE;
    RESTORE DATABASE VALIDATE;
    Check the number of available CPUs on your server to determine the maximum useful number of channels. I added 3 channels (4 in total); change that according to your requirements.
    br,
    pinela

  • Dedup performance getting worse and worse

    Hi Everyone
    I am running Windows Server 2012 and using it for storing VEEAM backups of our VMware environment, as well as a lot of SQL database backups.
    Deduplication has been running well for the past couple of months (about 35 TB saved), but the performance seems to be getting worse and worse, and it is now struggling to keep up with the rate at which new data is added.
    Checking Task Manager and Resource Monitor, it doesn't appear that fdsmhost.exe is really doing much work at all. I see long periods of complete inactivity with no disk access, which is not really what I would expect. The files-deduped and dedup-savings figures are not changing either.
    Does anyone have any good information about factors affecting dedup performance? I've tried manually starting the dedup job with the priority and memory usage parameters to force it to high priority and give it a decent amount of memory, but still no joy.
    We have recently added some more VMs to our setup, so some of the backup files are very large (around 1.6 TB). Is it likely to just be struggling with these large files?
    Thanks
    Harley

    1) I can share current stats, but I don't have anything historical from before things started to slow down:
    Volume                         : D:
    VolumeId                       :
    \\?\Volume{222fb
    StoreId                        : {4D7E4D3E-EC7F-4
    DataChunkCount                 : 79157302
    DataContainerCount             : 2188
    DataChunkAverageSize           : 27.09 KB
    DataChunkMedianSize            : 0 B
    DataStoreUncompactedFreespace  : 0 B
    StreamMapChunkCount            : 69269
    StreamMapContainerCount        : 564
    StreamMapAverageDataChunkCount :
    StreamMapMedianDataChunkCount  :
    StreamMapMaxDataChunkCount     :
    HotspotChunkCount              : 758605
    HotspotContainerCount          : 21
    HotspotMedianReferenceCount    :
    CorruptionLogEntryCount        : 0
    TotalChunkStoreSize            : 2.07 TB
    2) There is no integration between VEEAM and MS as such. VEEAM just writes the backup files to a share and then Windows deduplicates them at a later date. This was working fine for a couple of months but now seems to basically be doing nothing. This is causing me to run out of space fast, as a weekly full backup is around 1.6 TB and the nightly incrementals are about 200 GB each. When this was working they would all get deduplicated back to close to nothing, freeing up nearly all of the space. Obviously this makes a massive difference. VEEAM itself is still working, but no deduplication seems to be happening.
    3) I probably wasn't very clear, but it is only SQL backups being stored on here, not any actual live databases. As the nightly backups don't change very much and are never accessed except in a restore, they should be perfect candidates for deduplication, and they do actually give very good deduplication rates.
    I do notice that one of the items on your list is files approaching or greater than 1 TB, so maybe that is the problem, as the weekly backup is one 1.6 TB file.
    Maybe I'll try to split the backups up slightly and see if that makes any difference. Thinking about it... before we added the recent batch of servers to VMware the backup was around the 1 TB mark, and it seems to have died around the jump to 1.6 TB.
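    For reference, a minimal PowerShell sketch of the kind of manual run and progress check described above (the volume letter and the 50% memory figure are illustrative):
    # Kick off an optimization job at high priority with up to 50% of system memory
    Start-DedupJob -Volume "D:" -Type Optimization -Priority High -Memory 50
    # Watch the job and the overall savings while it runs
    Get-DedupJob -Volume "D:"
    Get-DedupStatus -Volume "D:" | Format-List SavedSpace, SavingsRate, OptimizedFilesCount, InPolicyFilesCount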
