[Solved] Reflect all changes to a filesystem in another filesystem

I'm looking for a solution to the following problem:
I want to set up an external hard disk so that two different partitions on it behave in the following way:
Given an ext4 partition mounted rw at /mnt/ext and an NTFS partition also mounted rw at /mnt/ntfs, any changes I make under /mnt/ext should be reflected exactly under /mnt/ntfs, minus everything ext4-specific. The NTFS partition should work as an exact mirror of the ext4 partition, reflecting all writes, deletes, and changes in the directory structure.
The purpose of this is to have a Linux-native FS with all the journaling and stuff for normal use, while being able to go around in a Windows/Mac infested environment and pass on data from the NTFS partition to random users without having to maintain/sync everything to the NTFS partition beforehand.
I have looked into union mounts, especially aufs3 as a seemingly up-to-date and actively maintained project, but got a little bit lost in the documentation because of its complexity.
From the LWN series on unioning filesystems (a little old) at http://lwn.net/Articles/396020 (under the section "background") I learned that union FSs are mostly used to stack one or more read-write FSs onto a read-only FS.
At least for aufs, its design documentation at https://github.com/sfjro/aufs3-linux/bl … 1intro.txt states that it supports "multiple writable branches" but offers "several policies to select one among multiple writable branches", indicating that a given change is written to only one of several branches at a time, which, as explained above, is not my intention.
Does somebody know whether such a setup can be realised using aufs or any other working union fs solution, or could point me towards a simpler and better solution? (Or at least tell me that it is impossible.)
Thanks in advance.
Last edited by 2ion (2013-08-19 22:35:57)

WonderWoofy wrote:
Yeah, aufs won't actually mirror the filesystem like you want here.  Honestly, I'm not entirely sure why you don't just use the NTFS filesystem to store these files anyway.  I mean, unless you have some sort of performance requirements for the files stored on the ext4 side, this seems unnecessary.
But if this setup is a requirement for whatever you want to do here (for example if the NTFS is on removable media and the ext4 is internal), then I would just set up an rsync on some kind of cronjob or systemd.timer.  This of course would mean a delay before the actual changes appear, but with a sufficiently low interval I think it would be fine (not too low, or you might end up with multiple rsync processes running at the same time).
Maybe someone else might have a better idea...
Yes, the reason I'm uncomfortable with using NTFS is that in the past I once ended up with a damaged NTFS file system (invalid data read, errors on write) which I found impossible to repair using just the NTFS tools available on Linux. The internet basically said to go and use Windows to fix it, but at the time I didn't have a Windows system, and possibly won't have one in the future – thus I can't rely on NTFS. And I'd rather not run a Windows system utility under WINE.
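For the record, a minimal sketch of the rsync-on-a-timer approach suggested above (paths, interval and lock file are illustrative, not tested on this exact setup; flock guards against the overlapping runs mentioned):

#!/bin/sh
# mirror.sh -- one-way mirror of the ext4 tree onto the NTFS partition.
# -rt copies recursively and preserves mtimes; -a is avoided because
# NTFS-3G cannot faithfully represent Unix owners and permissions.
# --modify-window=1 tolerates NTFS's coarser timestamp resolution, and
# --delete removes files on the NTFS side that were deleted under /mnt/ext.
flock -n /tmp/mirror.lock \
    rsync -rt --delete --modify-window=1 /mnt/ext/ /mnt/ntfs/

And a matching crontab entry to run it every five minutes:

*/5 * * * * /bin/sh /usr/local/bin/mirror.sh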

Similar Messages

  • Change the tax code at order header level so that it reflects the change in all item lines

    Dear consultant,
    Change the tax code at the order header level so that the change is reflected in all item lines under this order.
    Facts:
    I defined a tax code,
    assigned it to bill-to & ship-to customers,
    and did all the setup in the Oracle Receivables guide for defining tax.
    Example: I navigate to Order Management to create an order.
    First I select the customer and order type, then I navigate to the lines tab.
    I enter the item on the first line and the tax code comes in by default.
    I enter the second item line and the tax code again comes in by default, and so on for lines three, four, five, up to line 40.
    Now I want to change the tax code for all items, but not by opening and changing it on each line.
    I want a way to change the Tax Code at the order header so that the change is reflected in all item lines.
    Business Impacts:
    Suppose I create an order that includes 40 items and want to change the tax code that comes in by default; that requires me to open each line and change the tax code 40 times, which is not logical and not acceptable to my customer.

    Hi,
    The defaulting rules apply only the first time you add new lines to the Sales Order; the respective field values are defaulted from the SO header level.
    If you want to update the Tax Code on all 40 lines in one shot, select all the lines on the sales order and try Mass Change.
    (Navigation: Tools->Mass Change).
    Regards,
    Hemanth

  • WEBDAV rules repository not reflecting the changes

    The WEBDAV rules repository does not reflect changes unless I restart the application server. I don't have this problem with the file repository, because I can change the file repository using the link from the BPEL console and the changes are reflected immediately, from the very next instance.
    But with the WEBDAV repository, even if I change the rules, they don't get reflected in any further instances unless I restart the server.
    Please help on this one.

    Another observation: the entire BPEL process (with the decision service on the WEBDAV repository) works even if the WEBDAV server is shut down.
    The WEBDAV server needs to be up only for the first instance of the BPEL process. After that, even shutting down the WEBDAV server does not affect any further instances and they all run smoothly.
    This shows that the decision service builds up some kind of cache on its first call. A way to destroy this cache every time the WEBDAV repository changes would solve the problem.
    Any help on this topic is highly appreciated.

  • Changes not reflected in Change to Document (tcode CRMD_ORDER)

    Hello Expert,
    I made a change to the field 'Planned Completion Date' in my transaction through tcode CRMD_ORDER.
    When I view the change documents (CRMD_ORDER > top menu > Extras > Change Documents), the change is not reflected.
    All changes are usually captured in the change documents.
    Perhaps it is captured somewhere else?
    Please assist.
    Thank you.
    HJMY

    Hi HJMY,
    Check if the change document object provided for CRM_ORDER has the field in the SAP-provided structure. This can be verified in transaction SCDO: select change document object CRM_ORDER and display it; you will see a list of structures on the left-hand side. If the field you are referring to is part of one of these structures, then changes to this field are displayed when you go to CRMD_ORDER > top menu > Extras > Change Documents. If it is not present, you will have to create a new change document object and assign a proper structure; you can generate an update program to write those change documents. Hope this helps.
    Thanks,
    Priyanka

  • JSF page fails to reflect saved changes to model

    How do I refresh the info kept in the JSF backing bean in order to reflect changes saved to the model?
    My JSF page contains rows of data. Depending on the user’s choices, one or more rows may be eliminated from the model. My problem is the change in the model is never reflected in subsequent page displays, unless I kill the session. I’m sure this behavior is because the backing bean has a scope of session and the data loaded into the session bean never changes to reflect the change in the model. Invalidating the session kills everything including stuff I want to keep around (like logon info and other stuff that will really never change during the session). Changing the backing bean scope to request breaks the update completely.
    Help!!!
    Regards,
    Al Malin

    Thanks for the reply.
    I have (actually had) two problems. The first one I've solved since my initial post (stupid error on my part).
    The other unsolved problem is JSF pages displaying stale data. The user navigates to the first page, navigates to the second page, at this point the database changes, and then the user navigates back to the first page. Navigating back to the first page shows the data how it was, not how it is, because it comes from the session bean and not the database. If I could kill the session bean when the user navigates away from the page the problem would be solved, because I build the initial display in the constructor (BTW, please advise if this is a bad approach).
    Frank, your reply raised a question. Is there a way to specify a refresh frequency? If so, how?
    Thanks,
    Al Malin

  • Disk order changes, grub problem, filesystem check failed

    I'm having some problems installing Archlinux onto a machine with a lot of SATA drives, some connected by SATA cards. I have tried the 2009-08 netinstaller burned to CD, both -x86_64.iso and -x86_64-isolinux.iso, and I have the same problem with both.
    I am installing from a USB-connected optical drive to an Intel X25-M 80GB SSD connected to a motherboard SATA port. I also have 2 HDDs connected to motherboard SATA ports, and 4 more HDDs connected to 4 SIL-based PCIe SATA cards. Additionally, I have a 4GB Patriot USB flash drive connected to a motherboard USB port. None of the HDDs has a bootable MBR (I am planning to create an mdadm RAID with the HDDs), but the USB flash drive is bootable. The motherboard is an Asus Z8PE-D18 with the latest BIOS, in AHCI mode.
    First thing to note is that I was able to successfully install Fedora 12 Linux on this machine in the exact configuration in which I am trying to install Archlinux. I just installed Fedora 12 again last night, and it installed and loaded fine when I rebooted from the SSD.
    So, the problems I am having with Archlinux. During install, I found that the SSD shows up as either /dev/sde or /dev/sdf. This is odd since it was /dev/sda with Fedora. It makes sense for it to be /dev/sda, since it is on the first motherboard SATA port. But I proceeded with the Archlinux install, and grub seemed to detect the stage1 location properly -- root was set to (hd4,0) or (hd5,0) depending on whether the SSD was at sde or sdf. The kernel root was configured by UUID, so that does not depend on the drive order. Okay so far.
    The problem shows up on reboot. The bootloader immediately complains that there is no such partition sde1 or sdf1. I drop into a grub command line and do
    find /boot/grub/stage1
    and it replies with (hd0,0), so I modify the boot line to root (hd0,0) and boot. Now it gets pretty far. Lots of boot messages scroll by. Here are some of the last few before the problem:
    Waiting 10 seconds for device /dev/disk/by-uuid/22a35aa2-9799-4575-b1eb-456e819a1a26 ...
    kjournald starting.  Commit interval 5 seconds
    EXT3-fs (sde1): mounted filesystem with writeback data mode
    INIT: version 2.86 booting
    ::Starting UDev Daemon
    ::Triggering UDev uevents
    ::Loading Modules
    ::Waiting for UDev uevents to be processed
    ::Bringing up loopback interface
    ::Mounting Root Read-only
    ::Checking Filesystems
    /dev/sdf1:
    The superblock could not be read or does not describe a correct ext2
    filesystem. If the device is valid and it really contains an ext2
    filesystem (and not swap or ufs or something else), then the superblock
    is corrupt, and you might try running e2fsck with an alternate superblock
    **** FILESYSTEM CHECK FAILED
    * Please repair manually and reboot. Note that the root file system
    * is currently mounted read-only....
    Give root password for maintenance
    So I logged in as root and did an fdisk -l. The boot SSD was at /dev/sde. The menu.lst has root as (hd5,0), which would be sdf (which was correct during installation, but the disk order apparently changed). The kernel root= in menu.lst used by-uuid, and it at least points to the correct drive, which I suppose is why I was able to boot as far as I did, but when it tries to mount the root filesystem, it fails as shown above.
    So, at initial grub boot, the grub stage1 is found at (hd0,0). During installation, the SSD was sdf, but after booting the kernel, the SSD is sde. What is going on?
    One other experiment: I pulled all the drives (including the USB flash drive) except the SSD. The HDDs are in hot-swap slots, so that was easy. The PCIe SATA cards are still plugged into the PCIe slots. Then I was able to successfully install and boot Archlinux. But when I plugged the drives back in and rebooted, I had the same problem as detailed above.
    Any suggestions on how to fix this?

    I had the same problem.
    Last week I installed Arch onto a new SATA HD. I wanted to make sure the installation worked before I attached the other drives. On booting with the other drives attached, similar messages.
    My solution:
    Login as root.
    Follow the instructions to mount / as read-write so you can make changes to the filesystem.
    Edit /etc/fstab, eliminating the references to /dev/sdXX and replacing them with UUIDs or labels (as suggested above), as these won't change.
    In /etc/fstab:
    # external data sources
    #data /dev/sdb6
    UUID=931d7107-1241-4d82-ad28-fcbe7af8ba69 /data ext3 defaults 0 0
    #Documents /dev/sda9
    /dev/disk/by-label/Documents /data/Documents ext3 defaults 0 0
    Reboot and you should be good.
    You can find the UUID of the drives by using
    $ blkid
    or you can set a drive label with e2label, assuming you are using ext2, 3 or 4.
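    For example (device name and label here are placeholders; adjust to your disks):
    # e2label /dev/sdb6 Documents
    $ ls /dev/disk/by-label/
    The labeled partition can then be referenced as /dev/disk/by-label/Documents in /etc/fstab, as in the snippet above.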
    Good luck.

  • How is it possible to reflect Workbench changes in a clustered environment

    Hi All,
    I am running Endeca on MachineA and MachineB with separate MDEX engines.
    A dgraph cluster is implemented across MachineA and MachineB, so data is updated on MachineB when I run a baseline update on MachineA.
    I have installed Experience Manager on MachineA and created some pages using Workbench.
    The rules fire for MachineA without a baseline update, but I noticed the same rules are not working for MachineB even though both machines are in the cluster.
    When I run a baseline update on MachineA, the rules work on MachineB.
    How is it possible to reflect Workbench changes on both clustered MDEX engines without running a baseline update?
    Please share your suggestion.
    Thanks in Advance,
    SunilN

    Hi Guys,
    I have tried both approaches you suggested,
    but the rules are still not fired for MachineB.
    I tested with endeca_jspref on MachineB and the rules are not getting reflected there.
    Below is my MachineA AppConfig.xml file :
    <?xml version="1.0" encoding="UTF-8"?>
    <!--
    # This file contains settings for an EAC application.
    -->
    <spr:beans xmlns:spr="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns="http://www.endeca.com/schema/eacToolkit"
    xsi:schemaLocation="
    http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
    http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd
    http://www.endeca.com/schema/eacToolkit http://www.endeca.com/schema/eacToolkit/eacToolkit.xsd">
    <app appName="WineStore" eacHost="MachineA" eacPort="8888"
    dataPrefix="WineStore" sslEnabled="false" lockManager="LockManager">
    <working-dir>${ENDECA_PROJECT_DIR}</working-dir>
    <log-dir>./logs</log-dir>
    </app>
    <host id="ITLHost" hostName="MachineA" port="8888" />
    <host id="MDEXHost" hostName="MachineA" port="8888" />
    <host id="MDEXHost2" hostName="MachineB" port="8888" />
    <host id="webstudio" hostName="MachineA" port="8888" >
    <directories>
    <directory name="webstudio-report-dir">./reports</directory>
    </directories>
    </host>
    <lock-manager id="LockManager" releaseLocksOnFailure="true" />
    <script id="InitialSetup">
    <bean-shell-script>
    <![CDATA[
    if (ConfigManager.isWebStudioEnabled()) {
      log.info("Updating Oracle Endeca Workbench configuration...");
      ConfigManager.updateWsConfig();
      log.info("Finished updating Oracle Endeca Workbench.");
    }
    ]]>
    </bean-shell-script>
    </script>
    <script id="BaselineUpdate">
    <log-dir>./logs/provisioned_scripts</log-dir>
    <provisioned-script-command>./control/baseline_update.bat</provisioned-script-command>
    <bean-shell-script>
    <![CDATA[
        log.info("Starting baseline update script.");
        // obtain lock
        if (LockManager.acquireLock("update_lock")) {
          // test if data is ready for processing
          if (Forge.isDataReady()) {
            if (ConfigManager.isWebStudioEnabled()) {
              // get Web Studio config, merge with Dev Studio config
              ConfigManager.downloadWsConfig();
              ConfigManager.fetchMergedConfig();
            } else {
    ConfigManager.fetchDsConfig();
    // clean directories
    Forge.cleanDirs();
    PartialForge.cleanCumulativePartials();
    Dgidx.cleanDirs();
    // fetch extracted data files to forge input
    Forge.getIncomingData();
    LockManager.removeFlag("baseline_data_ready");
    // fetch config files to forge input
    Forge.getConfig();
    // archive logs and run ITL
    Forge.archiveLogDir();
    Forge.run();
    Dgidx.archiveLogDir();
    Dgidx.run();
    // distributed index, update Dgraphs
    DistributeIndexAndApply.run();
    // if Web Studio is integrated, update Web Studio with latest
    // dimension values
    if (ConfigManager.isWebStudioEnabled()) {
    ConfigManager.cleanDirs();
    Forge.getPostForgeDimensions();
    ConfigManager.updateWsDimensions();
    // archive state files, index
    Forge.archiveState();
    Dgidx.archiveIndex();
    // (start or) cycle the LogServer
    LogServer.cycle();
    } else {
    log.warning("Baseline data not ready for processing.");
    // release lock
    LockManager.releaseLock("update_lock");
    log.info("Baseline update script finished.");
    } else {
    log.warning("Failed to obtain lock.");
    ]]>
    </bean-shell-script>
    </script>
    <script id="DistributeIndexAndApply">
    <bean-shell-script>
    <![CDATA[
        DgraphCluster.cleanDirs();
        DgraphCluster.copyIndexToDgraphServers();
        DgraphCluster.applyIndex();
          ]]>
    </bean-shell-script>
    </script>
    <script id="LoadXQueryModules">
    <bean-shell-script>
    <![CDATA[
        DgraphCluster.cleanLocalXQueryDirs();
        DgraphCluster.copyXQueryToDgraphServers();
        DgraphCluster.reloadXqueryModules();
          ]]>
    </bean-shell-script>
    </script>
    <script id="ConfigUpdate">
    <log-dir>./logs/provisioned_scripts</log-dir>
    <provisioned-script-command>./control/runcommand.bat ConfigUpdate run</provisioned-script-command>
    <bean-shell-script>
    <![CDATA[
        log.info("Starting dgraph config update script.");
        if (ConfigManager.isWebStudioEnabled()) {
          ConfigManager.downloadWsDgraphConfig();
          DgraphCluster.cleanLocalDgraphConfigDirs();
          DgraphCluster.copyDgraphConfigToDgraphServers();
          DgraphCluster.applyConfigUpdate();
        } else {
    log.warning("Web Studio integration is disabled. No action will be taken.");
    log.info("Finished updating dgraph config.");
    ]]>
    </bean-shell-script>
    </script>
    <custom-component id="ConfigManager" host-id="ITLHost" class="com.endeca.soleng.eac.toolkit.component.ConfigManagerComponent">
    <properties>
    <property name="webStudioEnabled" value="true" />
    <property name="webStudioHost" value="MachineA" />
    <property name="webStudioPort" value="8006" />
    <property name="webStudioMaintainedFile1" value="thesaurus.xml" />
    <property name="webStudioMaintainedFile2" value="merch_rule_group_default.xml" />
    <property name="webStudioMaintainedFile3" value="merch_rule_group_default_redirects.xml" />
         <property name="webStudioMaintainedFile4" value="merch_rule_group_MobilePages.xml"/>
         <property name="webStudioMaintainedFile5" value="merch_rule_group_NavigationPages.xml"/>
         <property name="webStudioMaintainedFile6" value="merch_rule_group_SearchPages.xml"/>
    </properties>
    <directories>
    <directory name="devStudioConfigDir">./config/pipeline</directory>
    <directory name="webStudioConfigDir">./data/web_studio/config</directory>
    <directory name="webStudioDgraphConfigDir">./data/web_studio/dgraph_config</directory>
    <directory name="mergedConfigDir">./data/complete_index_config</directory>
    <directory name="webStudioTempDir">./data/web_studio/temp</directory>
    </directories>
    </custom-component>
    <forge id="Forge" host-id="ITLHost">
    <properties>
    <property name="numStateBackups" value="10" />
    <property name="numLogBackups" value="10" />
    </properties>
    <directories>
    <directory name="incomingDataDir">./data/incoming</directory>
    <directory name="configDir">./data/complete_index_config</directory>
    <directory name="wsTempDir">./data/web_studio/temp</directory>
    </directories>
    <args>
    <arg>-vw</arg>
    </args>
    <log-dir>./logs/forges/Forge</log-dir>
    <input-dir>./data/processing</input-dir>
    <output-dir>./data/forge_output</output-dir>
    <state-dir>./data/state</state-dir>
    <temp-dir>./data/temp</temp-dir>
    <num-partitions>1</num-partitions>
    <pipeline-file>./data/processing/pipeline.epx</pipeline-file>
    </forge>
    <dgidx id="Dgidx" host-id="ITLHost">
    <properties>
    <property name="numLogBackups" value="10" />
    <property name="numIndexBackups" value="3" />
    </properties>
    <args>
    <arg>-v</arg>
    </args>
    <log-dir>./logs/dgidxs/Dgidx</log-dir>
    <input-dir>./data/forge_output</input-dir>
    <output-dir>./data/dgidx_output</output-dir>
    <temp-dir>./data/temp</temp-dir>
    <run-aspell>true</run-aspell>
    </dgidx>
    <dgraph-cluster id="DgraphCluster" getDataInParallel="true">
    <dgraph ref="Dgraph1" />
    <dgraph ref="Dgraph2" />
         <dgraph ref="Dgraph3" />
    </dgraph-cluster>
    <dgraph-defaults>
    <properties>
    <property name="srcIndexDir" value="./data/dgidx_output" />
    <property name="srcIndexHostId" value="ITLHost" />
    <property name="srcPartialsDir" value="./data/partials/forge_output" />
    <property name="srcPartialsHostId" value="ITLHost" />
    <property name="srcCumulativePartialsDir" value="./data/partials/cumulative_partials" />
    <property name="srcCumulativePartialsHostId" value="ITLHost" />
    <property name="srcDgraphConfigDir" value="./data/web_studio/dgraph_config" />
    <property name="srcDgraphConfigHostId" value="ITLHost" />
    <property name="srcXQueryHostId" value="ITLHost" />
    <property name="srcXQueryDir" value="./config/lib/xquery" />
    <property name="numLogBackups" value="10" />
    <property name="shutdownTimeout" value="30" />
    <property name="numIdleSecondsAfterStop" value="0" />
    </properties>
    <directories>
    <directory name="localIndexDir">./data/dgraphs/local_dgraph_input</directory>
    <directory name="localCumulativePartialsDir">./data/dgraphs/local_cumulative_partials</directory>
    <directory name="localDgraphConfigDir">./data/dgraphs/local_dgraph_config</directory>
    <directory name="localXQueryDir">./data/dgraphs/local_xquery</directory>
    </directories>
    <args>
    <arg>--threads</arg>
    <arg>2</arg>
    <arg>--spl</arg>
    <arg>--dym</arg>
    <arg>--xquery_path</arg>
    <arg>./data/dgraphs/local_xquery</arg>
    </args>
    <startup-timeout>120</startup-timeout>
    </dgraph-defaults>
    <dgraph id="Dgraph1" host-id="MDEXHost" port="15000">
    <properties>
    <property name="restartGroup" value="A" />
    <property name="updateGroup" value="a" />
    </properties>
    <log-dir>./logs/dgraphs/Dgraph1</log-dir>
    <input-dir>./data/dgraphs/Dgraph1/dgraph_input</input-dir>
    <update-dir>./data/dgraphs/Dgraph1/dgraph_input/updates</update-dir>
    </dgraph>
    <dgraph id="Dgraph2" host-id="MDEXHost" port="15001">
    <properties>
    <property name="restartGroup" value="B" />
    <property name="updateGroup" value="a" />
    </properties>
    <log-dir>./logs/dgraphs/Dgraph2</log-dir>
    <input-dir>./data/dgraphs/Dgraph2/dgraph_input</input-dir>
    <update-dir>./data/dgraphs/Dgraph2/dgraph_input/updates</update-dir>
    </dgraph>
    <dgraph id="Dgraph3" host-id="MDEXHost2" port="15000">
    <properties>
    <property name="restartGroup" value="B" />
    <property name="updateGroup" value="a" />
    </properties>
    <log-dir>./logs/dgraphs/Dgraph3</log-dir>
    <input-dir>./data/dgraphs/Dgraph3/dgraph_input</input-dir>
    <update-dir>./data/dgraphs/Dgraph3/dgraph_input/updates</update-dir>
    </dgraph>
    </spr:beans>
    Do I need to change anything else?
    Please share your suggestions.
    Thanks
    SunilN
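    For what it's worth, with this Deployment Template layout Workbench rule changes are normally pushed to the dgraphs without a baseline via the provisioned ConfigUpdate script defined above, e.g.:
    ./control/runcommand.bat ConfigUpdate run
    Per the AppConfig.xml, that downloads the Workbench dgraph config and copies and applies it to every dgraph in DgraphCluster, including Dgraph3 on MachineB.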

  • [svn] 3839: Update the channel url to reflect the change in the runtime channel

    Revision: 3839
    Author: [email protected]
    Date: 2008-10-23 07:42:37 -0700 (Thu, 23 Oct 2008)
    Log Message:
    Update the channel url to reflect the change in the runtime channel
    Modified Paths:
    blazeds/branches/3.0.x/qa/apps/qa-manual/ajax/messaging/TextMessageRuntimeDest.html

    Many ways to do this. The easiest is to have a method in one of your classes that reads the data from the database and creates all the proper structure under a RootNode and then returns that RootNode.
    Whenever you want to refresh the tree, just call that method to recreate the root node (and all the underlying structure/nodes) and then re-set the root node of your tree model using the setRoot() method. That will cause it to refresh the display given the new root node.
    Once you have that working, you can get fancier and make it more efficient (only updating/firing events for the nodes that actually changed, etc).

  • My icons all changed from the program icon to one Microsoft Office icon

    I was reviewing emails and was reading one.... I have no idea what I did, but the desktop icons for my various programs were all changed to one icon, which is the icon for Microsoft Office 2010.  I have tried to figure out what I need to do to reset them to what they were originally.  Does anyone know how to do this?  Can the explanation be in user-friendly terminology, please?

    1. Open Windows Explorer (any folder/drive).
    2. As the IconCache is a hidden file, you need to enable “Show hidden files” option to see the same. To do this, head over to Tools > Folder Options, switch to View tab, and finally enable Show Hidden files, folders, and drives option.
    3. Now navigate to C:\Users\<username>\AppData\Local folder and then delete IconCache.db file. Here username is your user profile name.
    4. Reboot your computer to rebuild the icon cache.
    5. All icons should be displayed correctly now.
    Alternately, you can download a tool from the following weblink that will do this for you:
    http://fc02.deviantart.net/fs70/f/2010/141/0/e/Rebuild_Icon_Cache_v0_7b_by_screeny05.rar
    The tool has three options, one to rebuild the icon cache, another one to restore the original icon cache file and third option to delete the back up file of icon cache. When you rebuild the cache, it takes a back up of existing IconCache.db.
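    If you prefer the command line, the same rebuild can be done from a Command Prompt (assuming a default Windows 7 profile layout; Explorer recreates IconCache.db when it restarts):
    taskkill /f /im explorer.exe
    del /a "%LocalAppData%\IconCache.db"
    start explorer.exe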
    I am an HP employee.
    Regards,
    Vidya
    Make it easier for other people to find solutions, by marking my answer “Accept as Solution” if it solves your problem.
    ***Click on "Thumbs up" button to the bottom right side of my post to say thanks!***

  • SSPR - Unlock User - No policy grants the Requestor permission to complete all changes.

    When trying to unlock a user in FIM Portal I get the below error with FIM Admin account.
    Error processing your request: The operation was rejected because of access control policies.
    Reason: The operation failed as a result of insufficient access rights.
    Attributes: GateData
    Correlation Id: eda9f21c-a777-4ef2-b12f-25e82aef7973
    Request Id: 
    Details: No policy grants the Requestor permission to complete all changes.
    Any ideas?

    You need to update the MPR "Administration: Administrators can read and update Users": under the Target Resources tab, add the attribute GateData in the Attributes box.
    If you are doing this through the Sync Engine, also do the same in the MPR
    "Synchronization: Synchronization account controls users it synchronizes".
    That should solve the problem.
    You need to do this for every attribute you get the error for. FIM does not list all the attributes that fail with insufficient rights; it fails at the first attribute, so once you have fixed this one there may be others generating the same error. So watch out: the "Attributes: GateData" part may change, and for any attribute that fails you need to follow the steps above.

  • Audition on Mac OS X 10.10.2: left-click stops offering the marker copy menu past the one-hour point

    I'm working with a Mac, with the 10.10.2 system.  My Audition program has been working fine for years now.  But there is a problem I can't solve.  All my markers on a given piece of audio allow me to left click to highlight the time signature until my audio reaches the one hour point.  From there on, the left click will not give me the drop down menu in order to copy and then paste into an Excel sheet I then have to submit for proofing purposes.  After the one hour mark, I can only use Ctrl-C to copy; then when I slide up to my Excel sheet, I'm able to right click and paste.  Why is the program, all of a sudden, not allowing me to left click the mouse and have a drop down menu give me the option to "copy," as it does for any time signature markers up to 1:00:00.000?

    Which version of Audition? With the latest version of Audition running on a Windows 7 machine I can't get a dropdown menu at all when I left click on the time in the Markers List. The only way to do it is with Ctrl-C.

  • How to remove all changes made to an image in Camera Raw -

    In CS4 with latest ver. of Camera Raw, I think there is a way to remove all changes previously made to an image while using Camera Raw.
    In other words, after doing this the little icon in the upper right hand corner of an image in Bridge that indicates changes have been applied should be gone. Another way of asking this: how can I start all over again with the original image in Camera Raw, by removing all changes previously made to it in Camera Raw?
    TIA

    Sorry, I don't know the answer to that for sure*.  As a certified old geezer who avoids cluttering his mind with stuff that can easily be looked up in a menu, I stay away from keyboard shortcuts as much as I can, using only those applicable system wide, the most obvious ones in Photoshop like deselect, etc.
    * However, since no such shortcut appears next to the menu command, I doubt there is one.
    You could create an action and assign it a key command, though.  Doh!
    Never mind.  That was a total brain short circuit.  You can't create actions in Bridge.  You'll need a script.  Ask in the Bridge Scripting forum.
    Message was edited by: Ramón G Castañeda

  • How to see all changes on Project

    Hello,
    How can I see all changes made to one project, including the description at project level, budget changes, description and budget on WBS elements, all network changes, hours booked, etc.? I need the date of each change and who made it. In CN60 I can see something, but I also need budget changes (CJI8) in the same view. Is that possible?
    Thank you.
    Rodica

    Hello Rodica,
    You can also try checking report S_AR_87013558.
    Once you execute this report, you will see the budget amount shown for the specific Project / WBS.
    Click on this budget amount and go to menu Goto -> Line items.
    You will see "Display Budget Line Items for Projects", showing the text and a date-wise history of the budget amount.
    Check this and see if it helps you.
    Regards
    Tushar

  • Function module for getting all changes to BPs (Business Partners) for today

    Does anybody know a function module to find all change documents for BPs depending on the date?
    The requirement is that I have to find all the BPs changed today, as shown in transaction FPP3. How can I find all those change documents? Is there an FM for this?
    Vineel

    Hi Vineel,
    For debugging: click the rightmost toolbar icon, which is "Customizing of local layout", then click "Create shortcut". A screen will be displayed; in the Application section, select "System command" from the Type dropdown, enter /h in the Command field, and press OK. A shortcut will be created on your desktop. Now drag this shortcut onto the pop-up in transaction FPP3 and work through it with the debugger.
    Hope this helps.

  • Listing order and dates of my photos have all changed and now it is a mess in my photo library! I tried to list them again by date, but it didn't fix the problem. What to do?


    If your preferences keep getting messed up, try this: make a temporary backup copy (if you don't already have a backup copy) of the library and try the following:
    1 - delete the iPhoto preference file, com.apple.iPhoto.plist, that resides in your
         User/Home (~)/Library/Preferences folder.
    2 - delete iPhoto's cache file, Cache.db, that is located in your
         User/Home (~)/Library/Caches/com.apple.iPhoto folder.
    3 - launch iPhoto and try again.
    NOTE: If you've moved your library from its default location in your Home/Pictures folder, you will have to point iPhoto to its new location when you next open iPhoto by holding down the Option key while launching it.  You'll also have to reset iPhoto's various preferences.
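    If you are comfortable in Terminal, the same two deletions (using the default paths above; quit iPhoto first) can be done with:
    rm ~/Library/Preferences/com.apple.iPhoto.plist
    rm ~/Library/Caches/com.apple.iPhoto/Cache.db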
    Happy Holidays
