Copy existing security group to new

Hi all,
Need some help with the following:
I have created a new security group and would like to import the users from an existing distribution group (but leave that DL as is).
That's it, and thanks!
Off2work

Hi,
You can try something like this:
Add-ADGroupMember -Identity 'New Group Name' -Members (Get-ADGroupMember -Identity 'Old Group Name') -WhatIf
http://technet.microsoft.com/en-us/library/ee617210.aspx
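Note that the -WhatIf switch only previews the change; remove it once the preview looks right. As a slightly fuller sketch (assuming the RSAT ActiveDirectory module is installed; both group names are placeholders), you can copy the membership and then verify the result:

Import-Module ActiveDirectory
# Grab every member of the old group (users, computers and nested groups alike)
$members = Get-ADGroupMember -Identity 'Old Group Name'
# Preview first with -WhatIf, then re-run without it to apply
Add-ADGroupMember -Identity 'New Group Name' -Members $members -WhatIf
# Compare the two memberships; no output means they now match
Compare-Object (Get-ADGroupMember 'Old Group Name') (Get-ADGroupMember 'New Group Name') -Property SamAccountName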

Similar Messages

  • Exchange 2013 Mail Enable Existing Security Groups

    Hello,
    I can't seem to find how to mail enable an existing Security Group in Exchange 2013.  Does anyone know how to do this?  I have created them as Universal Security Groups in Active Directory.  I see that if you create them from the Exchange
    Admin Center, it will work, but I have a ton of groups with very complicated memberships that exist in AD and I would prefer not to delete them, recreate them, and adjust membership.
    I looked for a cmdlet that would let me do this, but I can't seem to find one.
    Does anyone know how to Mail Enable an Existing Group from Exchange 2013?
    Thanks

    Hello Stewart,
    If these groups are universal security groups, you can just follow Martina's suggestion to do that.
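For what it's worth, there is a cmdlet for exactly this: Enable-DistributionGroup mail-enables an existing group without deleting and recreating it. A minimal sketch, run from the Exchange Management Shell (the group name and file path are placeholders):

# Mail-enable one existing universal security group
Enable-DistributionGroup -Identity 'Contoso Universal SG'
# Bulk version, assuming groups.txt lists one group name per line
Get-Content 'C:\temp\groups.txt' | ForEach-Object { Enable-DistributionGroup -Identity $_ }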
    Thanks,
    Evan Liu
    TechNet Community Support

  • How to copy existing query report into new query report in SQ00

    Hi Experts,
I want to add the fields "company code" and "region" to the existing query report AQZZ/SAPQUERY/FKF1============
(list of vendor addresses). To do this, I did the following:
1. In SQ01, I went to "Edit -> Other user group" and selected the user group /SAPQUERY/FK.
2. I typed F1 in the query field and clicked the Change button.
3. I clicked the Next Screen button and entered the "Change Query F1: Select Fields" screen. There I clicked the "Basic list" button, found the company code checkbox, and saved; as a result, company code now appears in the standard report AQZZ/SAPQUERY/FKF1============.
Unfortunately, there is no region field (LFA1-REGIO). For this I think I should copy the existing query report into a new query report (e.g. Z_LIST_OF_VEND), whose name should be 14 characters. Please tell me briefly how to do this, because this is the first time I am using SQ00.
One more issue: when I select "Edit -> Other user group" and choose /SAPQUERY/FK, I only get the InfoSet /SAPQUERY/FIKD, but I need the InfoSet /SAPQUERY/FIDD. Please tell me how to add it to the user group. I think that if I get /SAPQUERY/FIDD into the user group /SAPQUERY/FK, I can also add region to the query report as described above via SQ01.
Please help with this; it would be very beneficial to my career.
    Regards,
    naresh

Hi Experts,
I solved the issue by changing the InfoSet in SQ02 (assigning the field to a field group) and then changing the query in SQ00.
    Regards,
    naresh.

How to copy existing row value into new row with a trigger. Same table

    Oracle guru,
I am looking for a before or after trigger statement that will copy the values of columns A & B from the previously inserted row into a newly inserted row when those columns are null. Same table. Hopefully my question is clear enough.
- Oracle 10g Express
- I have an existing before-insert trigger that inserts the id and timestamps when a new row is created.
- The table is composed of columns like id, timestamps, A, B and more.
    Thanks in advance
    Pierre

I will call it a very wrong design.
It is a wrong table design: you are duplicating data in the table and not complying with database normalization rules.
How about verifying whether columns A & B are NULL before inserting, and supplying the values in the INSERT itself, rather than doing it in a trigger?
If you are bent on achieving this, the code below might be helpful. However, I would never go with this approach. If you would care to explain the reasons for such a data model, people could suggest a better alternative that conforms to normalization rules.
create or replace trigger trg_test_table
after insert on test_table
for each row
declare
  -- autonomous transaction so the trigger can insert into (and commit against)
  -- the table it is firing on without raising ORA-04091 (mutating table)
  pragma autonomous_transaction;
begin
  if :new.col_a is null and :new.col_b is null then
    -- copy col_a and col_b from the most recent row inserted before this one;
    -- the literal 2 is a placeholder id from the original post
    insert into test_table
    select 2, systimestamp, col_a, col_b
      from test_table
     where pk_col = (select max(pk_col) from test_table b where b.pk_col < :new.pk_col);
  end if;
  commit;
end trg_test_table;
Read the SQL and PL/SQL FAQ and post the details mentioned there.
    Do not forget to mention output from
    select * from v$version;

  • Where to copy existing XCM file for new b2c project?

    Hi,
I have created a "b2c_myproject" application using the build tool, and I have one b2c application already running. I want to copy its XCM settings to my new project.
I do not want to redo the settings by looking at each value in XCM. Does anyone have an idea where I can copy the existing XCM files from?
    Thanks.
    Ashish Patel.

    Hi Ravi,
As per your instruction I copied the XCM folder to the desired location. But when I access the XCM page at
"http://localhost:50000/b2c_myproject/xcm/admin/init.do"
I cannot see the default settings that I made for my original b2c application;
there are no XCM configurations under the customer section.
Please let me know where I am making a mistake.
What I actually want is to see my basic XCM settings in b2c_myproject, so that I do not have to redo them.
    Thanks for your reply and helping me.
    Ashish Patel.

  • BDLS t-code Doubt after copying existing ECC to the new ECC

    Hi,
We are creating a new ECC client by copying an already existing ECC client with the SAP_ALL profile. Both are on ECC 6.0. The source and destination ECC systems are the Development and Demo servers respectively.
Old ECC client: ECDCLNT801
New ECC client: PHDCLNT902 (created by copying ECDCLNT801 with the SAP_ALL profile)
After creating the client, we ran the BDLS t-code twice in PHDCLNT902:
Source: ECDCLNT801 and Target: PHDCLNT902 (selecting both client-dependent and client-independent tables)
After doing this, I am getting the same GUID in the CRMPRLS table for both ECC clients.
Is it fine to have the same GUID for PHDCLNT902 and ECDCLNT801 in the CRMPRLS table, given that PHD 902 is a copy of ECD 801 with the SAP_ALL profile, or should the GUIDs be different?
Or do I have to delete that entry from PHD 902 and rerun BDLS to get a new GUID in PHD 902?
Kindly correct me; I will award points.
    Regards,
    Pawan Keshwani

    Hello Pawan,
    Have a look at OSS notes below:
    588701 - Change of the logical system name in R/3 Backend system
    765018 - Problems with logical system during data exchange.
They will solve your problem.
    Regards.
    Laurent.

  • Copy existing Webdynpro Projects to New

    Hi Experts,
I have three Web Dynpro projects in NWDI.
How can I copy them into a new workspace without affecting the existing ones?
The requirement is that, after copying them into the new workspace, we have to modify the views.
Do we need a new JDI server destination for that?
Please help.
Thanks a lot.

Hi Jain,
1. Just right-click on the project and select Properties.
2. Get the location of your project (workspace).
3. Navigate to that location and find your project.
4. Copy the project and paste it in some other location (e.g. D:\).
5. Now you can make changes to the project in NWDS.
If anything goes wrong while doing the modifications, you can get back the older project (the copy you made). Steps for that:
1. Delete the project which is in NWDS.
2. File -> Import -> select "project from existing workspace" -> browse and choose the location where you copied the project (D:\).
3. Finish.
Now you will be able to find the project as it was before the modifications.
Any issues, let me know.
Thanks and Regards,
Sharanya.R

  • Powershell script for security groups and users for multiple share folders

    Hi scripting team,
I need your help with a PowerShell script for the queries below.
1. List the security groups for more than one server share path and output them to a file (CSV).
For example, if there are two share paths:
\\servername\foldermain\folder1
\\servername\foldermain\folder2
I need the list of security groups for each share path, and the output needs to be grouped under each path.
2. Grab the users belonging to each main security group and its nested groups, for more than one security group, and list the users under each and every group. No need to display the nested groups themselves, just the users belonging to the main group and the users under the nested ones.
Your team's help is much appreciated.
Thank you.
Thilochana Kumararatne

    Hi Braham,
    Thanks for your quick reply.
Are we able to do this as a two-stage method?
1. Grab the security groups from the share paths.
If the script can grab the share paths from a separate txt file rather than having them hard-coded in the <your path> location, I can modify the txt file before each run. The output to a CSV file could look like this:
\\servername\foldermain\folder1
group 1
group 2
group 3
\\servername\foldermain\folder2
group 1
group 2
group 3
Then I know which groups belong to which share path, so I can remove the duplicate groups and keep the common groups to grab the users belonging to them.
2. The second script, same as the first: copy the security groups to a txt file, with output as below. What I need is each user's full name and sAMAccountName (user ID):
group 1
user1
user2
user3
group 2
user1
user2
user3
Looking forward to your help on this. Thank you.
Thilo
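As a rough sketch of stage one plus the user listing (assuming the RSAT ActiveDirectory module is available; the file names and paths are placeholders), something like this reads the share paths from a txt file, pulls the groups from each ACL, and expands each group to its users, nested groups included:

Import-Module ActiveDirectory
$results = foreach ($path in Get-Content 'C:\temp\paths.txt') {
    # Each access rule's IdentityReference is DOMAIN\name; keep the name part
    foreach ($ace in (Get-Acl -Path $path).Access) {
        $name = ($ace.IdentityReference.Value -split '\\')[-1]
        $group = Get-ADGroup -Filter "SamAccountName -eq '$name'" -ErrorAction SilentlyContinue
        if ($group) {
            # -Recursive flattens nested groups; filter to user objects only
            Get-ADGroupMember -Identity $group -Recursive |
                Where-Object { $_.objectClass -eq 'user' } |
                ForEach-Object {
                    [pscustomobject]@{
                        SharePath      = $path
                        Group          = $group.Name
                        User           = $_.Name
                        SamAccountName = $_.SamAccountName
                    }
                }
        }
    }
}
$results | Export-Csv 'C:\temp\share-groups.csv' -NoTypeInformation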

File Server Migration - From ORG A Forest to ORG B Forest (Need to Create and Map Security Groups Automatically on the New Migrated Folders) - Please Help

I have two forests with a trust that works fine.
I have a file server in ORG-A (forest) with Windows Server 2003 R2 Standard.
I have a file server in ORG-B (forest) with Windows Server 2012 (the new server for the migration).
I have 1000+ folders, each with a different permission set, on ORG-A. We are using security groups for granting permissions on the shared folders on ORG-A.
I need to migrate all the folders from ORG-A to ORG-B.
I am looking for an automated method of creating the security groups in AD during the migration. Once the migration is done, I can add the required users to the security groups manually.
Example:
Folder 1 on ORG-A has a security group called SEC-FOLDER1-ORGA.
I need an automated method of copying the files to ORG-B and creating new security groups in the ORG-B forest with the same permissions on parent and child folders. I shall add the users manually to the groups.
The output looks like:
Folder 1 on ORG-B has a permission entry called SEC-FOLDER1-ORGB (new security group).
I also need a summarized report of the security group mapping, for example: which security group on ORG-A is mapped to which security group on ORG-B.

    Hi,
I think you can try ADMT to migrate your user groups to the target domain/forest first. Once the groups are migrated, you can use Robocopy to copy the files with their permissions; those permissions will continue to be recognized in the new domain because the groups have been migrated.
    Migrate Universal Groups
    http://technet.microsoft.com/en-us/library/cc974367(v=ws.10).aspx
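As a rough sketch of the two automated pieces (all paths, names, and the OU are hypothetical, and ADMT with SID history, as linked above, remains the supported route for the groups themselves):

Import-Module ActiveDirectory
# 1. Recreate each ORG-A group in ORG-B with an ORGB suffix and record the mapping
$mapping = foreach ($old in Get-Content 'C:\temp\orga-groups.txt') {
    $new = $old -replace 'ORGA$', 'ORGB'
    New-ADGroup -Name $new -GroupScope DomainLocal -Path 'OU=FileGroups,DC=orgb,DC=local'
    [pscustomobject]@{ OrgAGroup = $old; OrgBGroup = $new }
}
$mapping | Export-Csv 'C:\temp\group-mapping.csv' -NoTypeInformation
# 2. Copy the data with NTFS security intact (/COPYALL includes ACLs, owner and auditing info)
robocopy \\ORGA-FS\Share \\ORGB-FS\Share /MIR /COPYALL /R:1 /W:1 /LOG:C:\temp\migration.log

Note that the copied ACLs still reference ORG-A SIDs, so after the copy you would still re-ACL the folders to the new ORG-B groups (or rely on SID history from ADMT).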

  • How to export "Managed by" field of Distribution and Security groups and import with new values? (Exchange 2010, AD 2003)

    My Active Directory environment is 2003 functional level and we have Exchange 2010.
    I am trying to find out the best way to do a mass edit for the "Managed by" values of our security and distribution groups.
I know we can export the "managed by" field with csvde, but I am not sure this is the correct way to do it. Also, in the case where multiple users are assigned to manage a distribution group, it only shows one value. PowerShell from Exchange 2010 can be used with Get-DistributionGroup, but as our AD environment is 2003, is this correct as well?
Finally, once the data is exported to CSV, can it be edited and then reimported to update the existing groups' managed-by fields with new values?
I am not really sure of the best way to go about this.
    Summary - We have 2003 AD with Exchange 2010 and I am trying to export a list of all our Distribution/Security groups showing the group name and managedby values so we can edit and update the
    existing managedby values with new ones. In some cases we have multiple users as the owners.
    Appreciate any advice on how this can be best achieved. Thank you.

    Hi,
    We can use the following command in Exchange 2010 to export "Managed by" field of Distribution and Security groups:
Get-DistributionGroup | Select-Object Name,@{label="ManagedBy";expression={[string]::Join(";",$_.ManagedBy)}},PrimarySmtpAddress | Export-Csv C:\export.csv -NoTypeInformation
After you change the ManagedBy field in export.csv and save it as a new file named import.csv, run the following command to set the new values (the semicolon-joined list is split back into an array so that multiple owners are handled):
Import-Csv C:\import.csv | ForEach-Object { Set-DistributionGroup -Identity $_.Name -ManagedBy ($_.ManagedBy -split ';') }
    Hope it works.
    Thanks,
    Winnie Liang
    TechNet Community Support

  • How to export "Managed by" field of Distribution and Security groups and import with new values?

    My Active Directory environment is 2003 functional level and we have Exchange 2010.
I am trying to find out the best way to do a mass edit of the "Managed by" values of our security and distribution groups.
I know we can export the "managed by" field with csvde, but I am not sure this is the correct way to do it. Also, in the case where multiple users are assigned to manage a distribution group, it only shows one value. PowerShell from Exchange 2010 can be used with Get-DistributionGroup, but as our AD environment is 2003, is this correct as well?
Finally, once the data is exported to CSV, can it be edited and then reimported to update the existing groups' managed-by fields with new values?
I am not really sure of the best way to go about this.
Summary - We have 2003 AD with Exchange 2010, and I am trying to export a list of all our distribution/security groups showing the group name and managed-by values so we can edit and update the existing managed-by values with new ones.
    Appreciate any advice on how this can be best achieved. Thank you.

    Hi Barkley,
    You can also refer to Official Scripting Guys forum to get a script solution:
    http://social.technet.microsoft.com/Forums/scriptcenter/en-US/home?forum=ITCG&filter=alltypes&sort=lastpostdesc
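If the Exchange cmdlets turn out to be a problem against the 2003 forest, a rough Active Directory module equivalent might look like the sketch below. Note this is an assumption-laden sketch: it requires the AD Management Gateway Service on the 2003 DCs (the AD module talks to Active Directory Web Services), and in AD itself managedBy is a single-valued DN attribute, unlike Exchange's multi-valued ManagedBy; the output path is a placeholder:

Import-Module ActiveDirectory
# Export every group's name and managedBy DN to CSV for editing
Get-ADGroup -Filter * -Properties ManagedBy |
    Select-Object Name, ManagedBy |
    Export-Csv 'C:\temp\groups-managedby.csv' -NoTypeInformation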
    Best Regards,
    Amy Wang

I want to install an SSD as my OS drive. Can I copy my existing drive straight to the new drive?

I wish to install an SSD as my OS drive. I seem to recall from my travels that it is possible to image my existing OS drive, install the SSD, copy the old drive's contents to the new drive, and then set that as the boot drive.
Any directions to where I can find out how to do this would be greatly appreciated, providing this can be done.

    TRIM Part 2
    Trim on SSD Drives
When I first wrote this article/thread (which was for my own benefit and was eventually turned into this thread as it is today), trim was not available on ssd drives. When it did arrive, there were mixed ideas on how it worked, a lot of which were incorrect at the time, and a lot of new users do not understand the use of Trim even today; explanations tend to be very technical, as trim is a complicated issue at the best of times. I found a simple explanation of the implementation of trim in late 2009, and I came across it again the other day. It still holds true today, so here's the link (I know you are going to point me to the AnandTech explanations; there are links in this thread, and if you're a technically minded person, AnandTech's articles may be a better option). This simpler explanation was still active in late 2011. As you can see if you carry on reading and don't turn off, it can be complicated for a non-technical user of ssd drives. So here's my explanation of trim. Remember this is a generalization of most ssd drives; it's a lot easier when you're dealing with your own ssd drive.
Trim and its association with GC (garbage collection) vary depending on how the controller's GC handles the use of Trim (Win7). Trim is activated by deleting files in the OS (Win7); it doesn't actually trim the ssd drive, but marks the block/file headers with a 1, making that block/file available to be re-written or written over. The trimming is actually done by the GC and will only work if the block doesn't contain other files that haven't been marked deleted, but this again depends on how GC is implemented: the GC can wait till the block is full, then move undeleted files to another block, allowing the block to be trimmed (GC). In doing this, the GC uses a lot more write amplification than it does if Trim in Win7 has previously marked these blocks/files for deletion. Flash memory can only be written over if all the information in that block has been marked with 1, which allows the whole block to be re-written. GC can only erase/Trim a full block, not individual files or pages.
What trim does is mark these blocks/files and make it simpler for the inbuilt GC (garbage collection) to recognize the blocks that are available for further use. On some controllers this will not necessarily happen immediately; it depends on whether the controller has been designed for "idle" use, "on the fly" use or "standby" use. GC/Trim can be brought into use in many different ways: having the computer sitting with the BIOS open, having the computer idling at log-in, placing the computer in standby mode, deleting files, or simply idling the computer overnight. It's a matter of finding out how the controller in your particular ssd handles garbage collection. You will find the most efficient way by experimentation or by other members passing on their particular way of doing it.
Low-level formatting as used on conventional HDD drives writes mainly 0's to every cell on a SSD drive, which is the opposite of how flash memory actually works. If you low-level format with Win7 or any software that writes 0's, or 0's and 1's, to the individual cells, you are not necessarily cleaning the ssd drive completely (hence the need for secure-erase software); you can actually make the performance worse.
If you use software designed for writing 1's to each cell, as you would if it was designed for ssd drives, this will "clean" the drive, and it is a good thing to do if you are selling the drive, or as a last resort if you are having problems with your ssd drive. The downside of this type of erasure is that it not only takes a long time, it uses high write amplification, and if used regularly it can reduce the flash cells' life expectancy considerably. These types of deletions bypass most controllers' DuraWrite-style capabilities (the way controllers extend the life expectancy of the ssd's individual MLC cells). DuraWrite (SandForce), and the equivalent technology other controllers have in some form or other, increases MLC life expectancy by between 5 and up to 30 times, depending on the design of the controller.
Basically, a command-line program like diskpart or diskpar will secure-erase a ssd drive either by writing to the individual cells ("clean all" command) or by simply marking the blocks to be deleted with a 1 ("clean" command); you need to use the latter, which takes only seconds and will return the majority of ssds to a new state without impacting too much on write amplification.
That's how I see the use of GC and Trim in Win7 today (Nov 2011). Note! Most ssd software, toolboxes etc. use the inbuilt Win7 diskpart commands to make things easier than using the command line. There is an explanation of the use of diskpart, which can be found HERE, but NOTE! It's written with conventional hard drives in mind, not ssd drives.
In the case of most toolboxes provided by manufacturers (the OCZ Toolbox is a typical example), they are incompatible with Intel's RST driver, so you would need to use diskpart from the command line. Also, toolboxes will in most cases not secure-erase a ssd with an OS on it, or one in use as the OS drive, e.g. the "C:" partition. If you want to secure-erase an OS drive you need to delete all the partitions on the drive, including any hidden partitions. There's an excellent tutorial on the Intel toolbox on LesT's website, TheSSDReview; here's the Link.
You will have to use diskpart or diskpar from the DOS prompt; you can't be in Windows with the ssd drive you intend to erase. This is mainly for when the toolboxes fail to work and deleting the partitions doesn't solve the problem.
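For reference, the diskpart sequence described above looks roughly like this (the disk number is a placeholder; double-check it against the list disk output, because the command is destructive):

diskpart
list disk
rem identify the ssd by its size, then select it (disk 1 here is a placeholder)
select disk 1
rem "clean" removes only the partition and volume structures and takes seconds;
rem "clean all" zero-fills every sector instead, which is slow
clean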
I'm sure there are exceptions to what I have written, and easier ways of explaining trim or secure-erasing some types of ssd drives. All I ask is that you don't isolate passages out of context; please read the whole article before you tell me I'm incorrect. There are a lot of people out there more informative than me on this subject, so I'm open to criticism on the subject. I want to impart only the correct facts in this thread.
Trim and the IDE issue: The Intel IDE drivers after Vista SP2 are fully compatible with the trim command, but for trim to pass through, the ssd controller itself also has to be compatible with IDE mode; e.g. Intel drives with Intel controllers are (according to Intel) fully compatible. The Crucial M4 appears not to be, and other controllers optimized for AHCI may also not be compatible. I can't comment on the Intel 510, as I've only ever used them in AHCI mode.
Wearlevelling: Here's an explanation that's not too complicated; it's from StorageSearch.com, here's the LINK.
Overprovisioning: Also from StorageSearch.com, a simple explanation of the need for overprovisioning, same link as above. That link covers a number of technologies used by the controller in ssd drives. Overprovisioning improves write performance: if the ssd is used in a high-write situation, increasing the overprovisioning will improve performance and write endurance, while in a high-read situation too much can hinder performance. In an OS situation, the 7% supplied on client drives is in most cases probably adequate, depending on use; if a lot of writing is done to the drive daily, reducing the partition size (which increases overprovisioning) by a small amount may improve performance.
    http://forums.extremeoverclocking.com/showpost.php?p=3643482&postcount=1
    Trimming SSD Performance Degradation
    Thursday, October 14, 2010
Today's solid state drives are worlds apart from those of just 3 years ago; however, they are not yet perfect. Performance degradation can still be observed through 'seasoning' of the SSD as well as filling it to capacity. SSD manufacturers have been successful in combating the effects of seasoning, but performance degradation when an SSD is filled to capacity seems to be just a bit more difficult.
Typical testing of most drives, through use of random data, will result in an observable performance drop which may start as soon as the SSD is filled past the 70% mark. This article will describe the common characteristics of SSDs, followed by a simple method to ensure that maximum performance is sustained with the drive.
    SEASONING
Much has been said with respect to performance degradation as a result of the ssd becoming 'seasoned' over time. By 'seasoned', we mean that the drive will eventually use up all of its empty blocks of NAND memory and, without TRIM, the process of writing to the drive actually becomes that of reading the block of data, understanding that it is invalid, erasing, and then writing, rather than simply writing to a clean block. Performance is greater when writing to 'clean' memory versus memory which has previously been used and contains invalid data that has not been cleared. The root cause of degradation is that when a non-TRIM ssd is told to delete data, it actually only marks the area as clear, which leaves the invalid data intact and tricks the ssd into believing that the NAND flash is available.
Data on a SSD cannot simply be over-written as it is on a hard drive, and this gets a bit more complicated when we erase information and the block it is located on also contains valid information that we don't want deleted. The process then becomes: read the data, recognize the valid information, move it to another clean block, erase the present block, and write. Manufacturers have tried to combat this issue of performance degradation with 3 solutions to the problem: wear leveling, TRIM and ITGC (or Garbage Collection).
    Wear leveling
Wear leveling is the process of the ssd understanding how many times each cell of memory has been written to, and then ensuring that all cells are written to evenly. After all, the life span of the ssd depends on the total number of writes, which has been coined 'write endurance'. Unlike the hard drive, which stores information in a static location, the SSD will move information around on a continuous basis, without your knowledge, to ensure that all cells wear evenly, thus affording a longer lifespan for the ssd. By doing this, the drive can also ensure that only the valid information is used, leaving blocks to be cleaned up by TRIM or ITGC, again without the knowledge of the user.
    ITGC/GC  (Idle Time Garbage Collection)
Garbage Collection (GC) is the process by which the SSD recognizes, in idle time, which cells are valid and which are not valid (or deleted) on the drive. It then clears the blocks of the invalid data to maintain the speed of writing to 'clean' pages or blocks during normal operation. GC was initially seen as a last resort if TRIM was not available; however, recent releases are showing new methods to be very aggressive, and results equal to those of TRIM are being observed. This is a huge benefit to those using RAID systems, where GC does the job because TRIM is not an option.
The SSD Review was able to discuss GC and TRIM with Crucial as they pertain to their SATA3 releases, as it has been observed that their RealSSD C300 SATA3 drives do not appear to show any performance degradation over extended use. Crucial confirmed that they had to consider that TRIM would not pass through the present release of SATA3 drivers, which helped them recognize that very aggressive GC would be necessary for the C300 SATA3 SSD's success. The subsequent result was that many forum threads were created by avid users questioning whether TRIM was, in fact, working in their SSDs, as no performance degradation was seen even in the toughest of test beds. To dispel a common belief, it is not the Marvell processor of the Crucial RealSSD that prevents TRIM from being passed, but rather the hardware and drivers of SATA3-capable motherboards. All Crucial SSDs are fully capable of passing TRIM direction to the OS.
    TRIM
TRIM occurs when the ssd clears blocks of invalid data. When you delete a file, the operating system only marks the area of the file as free in order to trick the system into believing the space is available; the invalid data is still present in that location. It's like ripping out the Table of Contents of a book: without it, one would not know what, if anything, is contained on the following pages. TRIM follows the marking of the area as free by clearing the invalid data from the drive. Without it, the process of reading, identifying invalid data, deleting or moving, and clearing the block before writing can actually result in performance 4 times slower than it would have been on a new drive.
In recently speaking with Kent Smith, Sr. Director of Product Marketing for SandForce, he identified that there are many variables outside of the hardware that are responsible for users not seeing the benefits of TRIM, the first of which are drivers at the OS level, which have to be working optimally for TRIM to function correctly. Another example occurred with early Windows 7 users testing their newly installed drives and not seeing the benefits of TRIM. Examination of these complaints revealed that users had originally made the Windows 7 installation on hardware that did not support TRIM and then cloned to an SSD on which TRIM was supported, but it would not work because of the original configuration settings. The same could be said of cloning an OS that originally had AHCI turned off, followed by a clone to the SSD where TRIM was not being passed, simply because AHCI has to be activated for TRIM to function.
    ENHANCE SSD OVER PROVISIONING MANUALLY
In our conversation, we broached the topic of SSD capacity with Mr Smith, to which he replied, "Are you trying to optimize performance or maximize capacity?", which reminded us that the main purpose of the consumer's transition to SSD was to maximize system performance. Filling a drive to capacity will hinder TRIM and GC ability, which will result in performance degradation; many drives will start to display performance changes once filled to 70% capacity. Testing has shown that the user can very simply add to the drive's over provisioning, especially if it is a 7% over-provisioned drive, by reducing the size of the partition; the new unallocated space will automatically be picked up as over provisioning and benefit the SSD in many ways. This idea has been tackled by Fusion-io, who include a utility within their products that gives the user complete control of the size of their over provisioning.
    OWC 120Gb SSD With 16x8Gb NAND Flash = 128Gb Total (7% OP)
Over provisioning allows more data to be moved at one time, which not only enhances GC but also reduces write amplification on the drive. Write amplification is a bit tricky to explain, but it is the measure of how many bytes are actually written when storage of a certain number of bytes is requested. A ratio of 1:1 would be ideal but is not a reality; a typical result would be an actual 40kb written for a typical 4kb file. In short, maximizing over provisioning and reducing write amplification increase the performance and lifespan of the drive. Over provisioning also provides for remapping of blocks should bad blocks be discovered during wear leveling, which, unlike on a hard drive, does not reduce the end-user capacity of the drive; the replacement blocks simply come from the over provisioning.
http://thessdreview.com/ssd-guides/optimization-guides/ssd-performance-loss-and-its-solution/
TRIM helps by:
- Reducing the time GC takes
- Increasing the amount of free space available after a GC (which increases the time it takes for performance to degrade after a GC)
- Letting the FTL have a wider selection of pages to choose from when it needs a new page to write to, which means it has a better chance of finding low-write-count pages, increasing the lifespan of the drive
Now, I want to be clear: a sufficiently clever GC on a drive that has enough reserved space might be able to do very well on its own, but ultimately what TRIM does is give the drive's GC algorithm better information to work with, which of course makes the GC more effective. What I showed above was a super simple GC; real drive GCs take a lot more information into account. First off, they have to deal with more than two blocks, and their data takes up more than a single page. They track data locality, and they only run against blocks that have hit a certain threshold of invalid pages or have really bad data locality. There are a ton of research papers and patents on the various techniques they use. But they all have to follow certain rules based on the environment they work in; hopefully this post makes some of those clear.
    http://www.devwhy.com/blog/2009/8/4/from-write-down-to-the-flash-chips.html

  • In ical I just added new calendars to a pre-existing calendar group, I can make events with these calendars, but not reminders, any suggestions?


    Hi,
Lion has changed the way reminders (To Dos, as they were) work. They now need to be in a separate calendar.
In iCal, open the File menu, select New Reminder List..., and select where to put it.
    Best wishes
    John M

  • Error while adding new security group in content server

    Hi,
When I try to add a new security group in UCM using the User Admin applet, I am getting the following error:
    Event generated by user 'weblogic' at host 'vpunvfpctnsz-07.ad.infosys.com:16200'. Unable to execute service ADD_GROUP and function insertGroupRow.
    Unable to execute query 'IroleDefinition(INSERT INTO RoleDefinition (dGroupName, dRoleName, dPrivilege, dRoleDisplayName)
    values ('Test_111', 'admin', 0, ''))'. ORA-00001: unique constraint (DEV_OCS.PK_ROLEDEFINITION) violated
    java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (DEV_OCS.PK_ROLEDEFINITION) violated. [ Details ]
    An error has occurred. The stack trace below shows more information.
    !csUserEventMessage,weblogic,vpunvfpctnsz-07.ad.infosys.com:16200!$!csServiceDataException,ADD_GROUP,insertGroupRow!$!csDbUnableToExecuteQuery,IroleDefinition(INSERT INTO RoleDefinition (dGroupName\, dRoleName\, dPrivilege\, dRoleDisplayName)<br>          values ('Test_111'\, 'admin'\, 0\, ''))!$ORA-00001: unique constraint (DEV_OCS.PK_ROLEDEFINITION) violated<br>!syJavaExceptionWrapper,java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (DEV_OCS.PK_ROLEDEFINITION) violated<br>
    intradoc.common.ServiceException: !csServiceDataException,ADD_GROUP,insertGroupRow!$
    at intradoc.server.ServiceRequestImplementor.buildServiceException(ServiceRequestImplementor.java:2071)
    at intradoc.server.Service.buildServiceException(Service.java:2207)
    at intradoc.server.Service.createServiceExceptionEx(Service.java:2201)
    at intradoc.server.Service.createServiceException(Service.java:2196)
    at intradoc.server.ServiceRequestImplementor.handleActionException(ServiceRequestImplementor.java:1736)
    at intradoc.server.ServiceRequestImplementor.doAction(ServiceRequestImplementor.java:1691)
    at intradoc.server.Service.doAction(Service.java:476)
    at intradoc.server.ServiceRequestImplementor.doActions(ServiceRequestImplementor.java:1439)
    at intradoc.server.Service.doActions(Service.java:471)
    at intradoc.server.ServiceRequestImplementor.executeActions(ServiceRequestImplementor.java:1371)
    at intradoc.server.Service.executeActions(Service.java:457)
    at intradoc.server.ServiceRequestImplementor.doRequest(ServiceRequestImplementor.java:723)
    at intradoc.server.Service.doRequest(Service.java:1865)
    at intradoc.server.ServiceManager.processCommand(ServiceManager.java:435)
    at intradoc.server.IdcServerThread.processRequest(IdcServerThread.java:265)
    at intradoc.idcwls.IdcServletRequestUtils.doRequest(IdcServletRequestUtils.java:1332)
    at intradoc.idcwls.IdcServletRequestUtils.processFilterEvent(IdcServletRequestUtils.java:1678)
    at intradoc.idcwls.IdcIntegrateWrapper.processFilterEvent(IdcIntegrateWrapper.java:221)
    at sun.reflect.GeneratedMethodAccessor120.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at idcservlet.common.IdcMethodHolder.invokeMethod(IdcMethodHolder.java:87)
    at idcservlet.common.ClassHelperUtils.executeMethodEx(ClassHelperUtils.java:305)
    at idcservlet.common.ClassHelperUtils.executeMethodWithArgs(ClassHelperUtils.java:278)
    at idcservlet.ServletUtils.executeContentServerIntegrateMethodOnConfig(ServletUtils.java:1592)
    at idcservlet.IdcFilter.doFilter(IdcFilter.java:330)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:94)
    at java.security.AccessController.doPrivileged(Native Method)
    at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:313)
    at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:414)
    at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:138)
    at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.dms.wls.DMSServletFilter.doFilter(DMSServletFilter.java:330)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.doIt(WebAppServletContext.java:3684)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3650)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2268)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2174)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1446)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Caused by: intradoc.data.DataException: !csDbUnableToExecuteQuery,IroleDefinition(INSERT INTO RoleDefinition (dGroupName\, dRoleName\, dPrivilege\, dRoleDisplayName)
    *          values ('Test_111'\, 'admin'\, 0\, ''))!$ORA-00001: unique constraint (DEV_OCS.PK_ROLEDEFINITION) violated* at intradoc.jdbc.JdbcWorkspace.handleSQLException(JdbcWorkspace.java:2441)
    at intradoc.jdbc.JdbcWorkspace.execute(JdbcWorkspace.java:584)
    at intradoc.server.UserService.insertGroupRow(UserService.java:1201)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at intradoc.common.IdcMethodHolder.invokeMethod(IdcMethodHolder.java:86)
    at intradoc.common.ClassHelperUtils.executeMethodEx(ClassHelperUtils.java:310)
    at intradoc.common.ClassHelperUtils.executeMethod(ClassHelperUtils.java:295)
    at intradoc.server.Service.doCodeEx(Service.java:549)
    at intradoc.server.Service.doCode(Service.java:504)
    at intradoc.server.ServiceRequestImplementor.doAction(ServiceRequestImplementor.java:1622)
    ... 39 more
    Caused by: java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (DEV_OCS.PK_ROLEDEFINITION) violated
    at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:89)
    at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:135)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:210)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:473)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:423)
    at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1095)
    at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193)
    at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:1028)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1379)
    at oracle.jdbc.driver.OracleStatement.doScrollExecuteCommon(OracleStatement.java:5846)
    at oracle.jdbc.driver.OracleStatement.doScrollStmtExecuteQuery(OracleStatement.java:5989)
    at oracle.jdbc.driver.OracleStatement.executeUpdateInternal(OracleStatement.java:2012)
    at oracle.jdbc.driver.OracleStatement.executeUpdate(OracleStatement.java:1958)
    at oracle.jdbc.driver.OracleStatementWrapper.executeUpdate(OracleStatementWrapper.java:301)
    at weblogic.jdbc.wrapper.Statement.executeUpdate(Statement.java:503)
    at intradoc.jdbc.JdbcWorkspace.execute(JdbcWorkspace.java:564)
    ... 50 more
I checked in the database; the security group Test_111 is not present in the RoleDefinition table.
    What could be the issue?
    Regards,
    Minal

1) Try importing the CMU bundle with the 'Overwrite Duplicates' option unchecked.
2) In the CMU bundle, open the file roles_guest.hda and see if the 'guest' role has access to any group that starts with a special character, or to a group you haven't created in the system.
Eg: guest
#AppsGroup
0
Also open the securitygroups folder in the CMU bundle and see if you can find any groups that start with a special character, or groups you haven't created in the system.
3) Identify that group and execute the query below against the UCM database:
select * from roledefinition where dgroupname = '#AppsGroup';
Replace '#AppsGroup' with the group name you identified.
4) The solution would be to delete all the rows with that dgroupname from the roledefinition table:
delete from roledefinition where dgroupname = '#AppsGroup';
Replace '#AppsGroup' with the group name you identified.

  • Creating New plant by copying Existing

    Dear All,
I have to create a new plant by copying an existing one, and I want the new plant to have all the configuration that exists for the existing plant.
My question is: by copying the plant, will all the configuration for the new plant get set up automatically (such as account determination), or do I have to configure it manually?
    Thanks
    Nitesh

Hi,
No for account determination:
we have to assign the valuation grouping code (0001, or any other you want to use) manually
in t-code OMWD.
When copying a plant, the storage locations, the assignment to the company code, and the OMS2 settings are copied automatically.
Regards,
Kailas
