Best Practices to update Cascading Picklist mapping for Account record type

1. Most of the existing picklist value names in the parent and related picklists have been modified in the external app's master list, so the same changes need to be made in CRMOD.
2. If we need to update a picklist value, do we need to DISABLE the existing value and CREATE a new one?
3. Are there any best practices to avoid doing the cascading picklist mapping manually for the Account record type? We have around 500 picklist values to map between parent and related picklists.
Thanks!

Mahesh, I would recommend disabling the existing values and creating new ones. This means manually remapping the cascading picklists, though a script can at least prepare the worklist, as sketched below.
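
There is no supported shortcut around redoing the mapping itself, but the 500-value worklist can be generated rather than compiled by hand. A minimal sketch in Python, assuming the external app's master list and a CRMOD picklist export are both available as CSV files (file and column names here are hypothetical):

    import csv

    # Hypothetical inputs: the external master list and a CRMOD export,
    # each with columns "parent_value" and "related_value".
    with open("external_master.csv", newline="") as f:
        master = {(r["parent_value"], r["related_value"]) for r in csv.DictReader(f)}
    with open("crmod_export.csv", newline="") as f:
        crmod = {(r["parent_value"], r["related_value"]) for r in csv.DictReader(f)}

    # Pairs present only in CRMOD get disabled; pairs present only in the
    # master list get created and mapped.
    with open("remap_worklist.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["action", "parent_value", "related_value"])
        w.writerows(["disable", p, r] for p, r in sorted(crmod - master))
        w.writerows(["create", p, r] for p, r in sorted(master - crmod))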

Similar Messages

  • JoinFieldValue for Account record type.

    I am trying to validate the Account record type field using the JoinFieldValue function below. Will this work?
    JoinFieldValue('<Opportunity>', '<OpportunityId>', '<IndexedPick0>') <> 'Yes' , and if so, display a custom error message.
    Sundar

    Thanks for the reply.
    I have a field "Result" (IndexedPick0) on the opportunity. A user may change the Account Type to Customer only if the associated opportunity's Result is "Yes - Pick Up"; otherwise a custom error message should be displayed. To configure this, I want to read the opportunity's Result field value using JoinFieldValue and display the error message, using this expression on the Account Type field:
    JoinFieldValue('<Opportunity>', '<OpportunityId>', '<IndexedPick0>') <> 'Yes - Pick Up'

  • EBS Supplier best practice to update vendor site code, update or create a new one

    I have a question related to the EBS Supplier vendor site code. The application lets you update the vendor site code, but what is the best practice for updating it? Would you inactivate the existing one and create a new one, or would you just update the existing value?

    OK,
    My workaround was to put a commit action in my task flow. After that I put two more actions (execute) and then navigate back to my page. This works, but I would like to know if there is a more efficient way to do this only when I am inserting.
    Regards

  • When trying to update apps it asks for account password but it doesn't show my account. Help!

    When trying to update apps it asks for an account password, but it doesn't show my account ID; it shows my dad's. How can I fix this so I can sign in?

    It sounds like the apps were purchased using your dad's account, so they will have to be updated using that account.

  • "An error occurred while updating the default player for audio file types"

    Getting ready for a deployment of iTunes 9.2 to 1400+ users.
    Getting this error when a 'normal' user goes into Edit/Preferences in iTunes and makes no changes:
    "An error occurred while updating the default player for audio file types. You do not have enough access privileges for this operation."
    I've elevated the privileges on the C:\program files\itunes directory to Full Control for this user; however, the problem remains.
    I tried modifying the folder to remove 'read-only', as suggested here, but read-only keeps reapplying itself.
    The user has full rights to the location of the media as specified under the Advanced tab. If I tick 'Set as default media player', remove the tick, and then click OK, I do not get the error. From what I've read, it has affected older versions of iTunes as well.
    Does anyone have any other suggestions?
    Thanks in advance.

    I had a similar issue. It turned out that, somehow, one of the directories or files in my iTunes music folder had been set to read-only. Fixing it was easy:
    Go to Edit/Preferences.
    Choose the Advanced tab.
    In the General subtab, select the contents of the box labeled "iTunes Music folder location".
    Click Start, choose Run.
    Paste the location.
    Press the button to move up one level.
    Right-click the music folder and choose Properties.
    If this is your problem, the read-only box will be a solid color or checked. Clear it and choose the default "apply to folder and all subfolders and files". This clears the flag on everything in there; after I did this, the error stopped.
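
    For a 1400-user deployment, the same fix can be scripted rather than clicked through per machine. A rough Python sketch, assuming the music folder path shown on the Advanced tab (the path below is hypothetical):

        import os, stat

        music_root = r"C:\Users\someuser\Music\iTunes"  # hypothetical; use your Advanced tab path
        for dirpath, dirnames, filenames in os.walk(music_root):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                # Clear the Windows read-only attribute by restoring write permission.
                os.chmod(path, os.stat(path).st_mode | stat.S_IWRITE)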

  • Upgraded to 10.3.1.55, now can't set iTunes as default player: "An error occurred when updating the default player for audio file types. You do not have enough access privileges for this operation." How to fix?

    After I upgraded iTunes, it doesn't recognize a CD in the drive and apparently isn't the default player. It won't let me set iTunes as the default player; I get the error message "An error occurred when updating the default player for audio file types." I tried Whitesides' remedy (changing the read-only status of the folder in Windows), but it didn't solve the problem. Any suggestions? Thanks.
    J

    Oddly enough, I think I just solved it. It looks like, for some reason, my computer has about 8 different "iTunes Music" folders, and I've been saving my music to the wrong one. Neat. Music now imports and plays as it should. Copying things over is going to be so fun tonight!

  • Where can I find the user key precedence hierarchy for each record type?

    Example: I want to update contact records through the CRMOD web service API.
    So I'm looking at the "Oracle Web Services On Demand Guide, Version 6.0 (released August 2010)", page 316, and it lists 3 user keys for Contact.wsdl v2.0 in the following order:
    1. FirstName and LastName
    2. Id
    3. ExternalSystemId
    From what I can see, this order does not seem to reflect the precedence hierarchy of these 3 user keys.
    I've sent in a test update where I supplied a FN, LN, and EUID, and the contact that matched the EUID got updated.
    (I'm glad it did, because EUID really needs to take precedence over FN+LN, otherwise you could never change a contact's last name without knowing the contact's Row Id.)
    Does anyone know where I can find the precedence hierarchy for each record type's user keys (other than the obvious and time-consuming trial and error)?

    Hi,
    We experienced similar problems with the account object and asked Oracle Support about this. This was their answer:
    "[...] thank you for contacting CRM On Demand Customer Care. Regarding your question, please note the below: when performing a query, the user key fields are looked for in this order: Row Id, External System Id, AccountName and Location. Basically, the search will be performed by AccountName and Location only when the other fields are missing. This is expected behavior because the Row Id is the strongest filter, as it is always unique. The External System Id comes second, as it is supposed to be unique in another system."
    So, I guess the order is always
    1) Row Id
    2) External System Id
    3) specific field combinations...
    kind regards
    Kai
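    Whichever record type you use, the precedence described in that support answer is easy to encode when deciding which user key an update payload will match on. A small illustration (plain Python, not CRMOD API code; the key lists are just the two examples from this thread):

        # User-key precedence per the support answer: Row Id, then
        # External System Id, then the type-specific field combination.
        USER_KEYS = {
            "Account": ["Id", "ExternalSystemId", ("AccountName", "Location")],
            "Contact": ["Id", "ExternalSystemId", ("FirstName", "LastName")],
        }

        def matching_key(record_type, supplied_fields):
            """Return the user key a query/update would match on."""
            for key in USER_KEYS[record_type]:
                fields = key if isinstance(key, tuple) else (key,)
                if all(f in supplied_fields for f in fields):
                    return fields
            return None

        # The test from this thread: FN, LN and EUID supplied -> EUID wins.
        print(matching_key("Contact", {"FirstName", "LastName", "ExternalSystemId"}))
        # -> ('ExternalSystemId',)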

  • Powerbook G4 10.4.11 won't start from hard drive. Tried repair: "The underlying task reported failure on exit (-9972). Invalid sibling link, invalid B-tree header, invalid map node, invalid record type; the volume needs to be repaired."

    Powerbook G4 10.4.11 won't start from the hard drive. Tried repair: "The underlying task reported failure on exit (-9972). Invalid sibling link, invalid B-tree header, invalid map node, invalid record type; the volume needs to be repaired."

    kauribill wrote:
    "The underlying task reported failure on exit (-9972). Invalid sibling link, invalid B-tree header, invalid map node, invalid record type; the volume needs to be repaired."
    This is a directory issue that Disk Utility cannot fix. Although it manifests itself as a software issue, it may sometimes be hardware-based. See "Disk Utility reports 'Underlying task reported failure' when repairing a volume": http://support.apple.com/kb/TS1901?viewlocale=en_US. You can try using a utility like TechTool Pro, Drive Genius, or DiskWarrior to repair and replace the directory. Another option would be to use the Archive and Install feature to reinstall. If the problem returns after the repair, you may have a failing or failed HDD.
    cornelius

  • Best Practice on Updating From a DB

    Hi Everyone,
    What are some best practices for getting data from an Oracle database into the cache layer when a data change event (insert, update, delete) happens? I've searched far and wide, and the best answer I can find is to use Extractor/Replicator -> JMS -> Subscriber -> cache.
    Thank you for your help.

    You're right, DCN is an interesting idea, but it's again a case where the technology works for simple Hello World demos but fails to deliver in the real world.
    To me, DCN looks like an unfinished Oracle project: a lot of marketing, but poor features. It's suited mostly to student work or test labs, not real-world complexity.
    Two reasons:
    1. DCN has severe limitations on the complexity of joins and queries if you plan to use the query change notification feature.
    2. It puts too much pressure on the database by creating tons of events you don't need and don't expect, because it's too generic.
    Instead of DCN, create ordinary Oracle AQ queues, using a tiny SQL object type as the event payload, then create triggers and/or PL/SQL stored procedures that fill the event with all the primary keys you need and the unique ID of the object to extract.
    Triggers will filter out unnecessary updates, sending events only when you want them.
    If the conditions are too complex for triggers, you can create and place events either by a call from the event-source application itself or on a scheduled basis; it's entirely up to you. The technique of creating object views and using INSTEAD OF triggers on those views also works pretty well.
    And finally, implement a listener on the Coherence side that reads the event, makes the necessary extracts, and assembles a Java object ready to be placed into the cache, based on the event ID and the event's set of primary keys (a sketch of the dequeue loop follows this post). After the Java object is assembled, you can place it into the cache.
    Don't use Hibernate, TopLink, or any other relational-to-object framework; they're too slow and add excessive, unnecessary overhead to the process. Use standard Oracle database features instead: they're much faster and transaction-safe. Using these frameworks with a 10g or 11g database is obsolete and caused mainly by a lack of knowledge among Java developers about the database's features in this regard.
    To make the whole system fail-safe and scalable, you have to implement the listener in a fail-safe fashion, as a work manager plus slave processes spawned on the other nodes. The work manager has to be auto-fail-safe and auto-scalable, so that if the node holding the work manager instance fails, due to a cache cluster member departure, a reset, or something else, another work manager is automatically spawned on the first available node.
    The work manager should also spread and synchronize the work among the slave listener processes based on the current cache cluster members, automatically re-balancing and recovering work on cache member join/departure.
    Out of the box, Coherence has a work manager implementation, but it's not fail-safe and does not provide the automatic scale-up/recovery features described above, so you have to implement your own.
    All the features described here are implemented and happily used in a complex OLTP + workflow system backed by a big Oracle RAC cluster with a huge workload, processing millions of transactions per day.
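
    A bare-bones sketch of the dequeue side of that pipeline, assuming python-oracledb in thick mode and an AQ queue named CACHE_EVENTS carrying a SQL object type EVENT_T; the queue name, type name, and connect details are all hypothetical, and the cache put itself is left abstract:

        import oracledb  # AQ object payloads need thick mode
        oracledb.init_oracle_client()

        conn = oracledb.connect(user="feeder", password="...", dsn="db5")
        event_type = conn.gettype("EVENT_T")            # hypothetical SQL object type
        queue = conn.queue("CACHE_EVENTS", event_type)  # hypothetical AQ queue
        queue.deqoptions.wait = 10                      # block up to 10 s per dequeue

        while True:
            msg = queue.deqone()
            if msg is None:
                continue
            event = msg.payload  # carries the object ID and the primary keys
            # ... select the rows by primary key, assemble the cache value,
            # and place it into the Coherence cache (omitted here) ...
            conn.commit()        # the dequeue is transactional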

  • Best practice to update to Snow Leopard

    I just placed my family pack order on Amazon.com for Snow Leopard. But this will be the first time I've done an OS upgrade on a Mac (all 4 Macs in my house came with Leopard on them, so we've only done the "software update" variety). I am a reformed PC guy, so humor me!
    What is the best practice for upgrading from 10.5.8 to 10.6? On a PC, my inclination would be to back up my data, reformat the whole drive, and install Windows fresh... then all my apps... then the data. I hate that, and it takes hours.
    What is the best-practice way to upgrade the Mac OS?

    The best option is Erase and Install. The next best option is Archive and Install. Use the latter if you do not want to or can't erase your startup volume.
    How to Perform an Archive and Install
    An Archive and Install will NOT erase your hard drive, but you must have sufficient free space for a second OS X installation, which could be 3-9 GB depending on the version of OS X and the selected installation options. The free-space requirement is over and above the normal free-space requirement, which should be at least 6-10 GB. Read all the linked references carefully before proceeding.
    1. Be sure to use Disk Utility first to repair the disk before performing the Archive and Install.
    Repairing the Hard Drive and Permissions
    Boot from your OS X Installer disc. After the installer loads, select your language and click on the Continue button. When the menu bar appears, select Disk Utility from the Installer menu (Utilities menu for Tiger). After DU loads, select your hard drive entry (mfgr.'s ID and drive size) from the left-side list. In the DU status area you will see an entry for the S.M.A.R.T. status of the hard drive. If it does not say "Verified", then the hard drive is failing or failed. (SMART status is not reported on external FireWire or USB drives.) If the drive is "Verified", then select your OS X volume from the list on the left (the sub-entry below the drive entry), click on the First Aid tab, then click on the Repair Disk button. If DU reports any errors that have been fixed, re-run Repair Disk until no errors are reported. If no errors are reported, quit DU and return to the installer.
    2. Do not proceed with an Archive and Install if DU reports errors it cannot fix. In that case use Disk Warrior and/or TechTool Pro to repair the hard drive. If neither can repair the drive, then you will have to erase the drive and reinstall from scratch.
    3. Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When you reach the screen to select a destination drive click once on the destination drive then click on the Option button. Select the Archive and Install option. You have an option to preserve users and network preferences. Only select this option if you are sure you have no corrupted files in your user accounts. Otherwise leave this option unchecked. Click on the OK button and continue with the OS X Installation.
    4. Upon completion of the Archive and Install you will have a Previous System Folder in the root directory. You should retain the PSF until you are sure you do not need to manually transfer any items from the PSF to your newly installed system.
    5. After moving any items you want to keep from the PSF you should delete it. You can back it up if you prefer, but you must delete it from the hard drive.
    6. You can now download a Combo Updater directly from Apple's download site to update your new system to the desired version as well as install any security or other updates. You can also do this using Software Update.

  • Looking for best practices when creating DNS reverse zones for DHCP

    Hello,
    We are migrating from ISC DHCP to Microsoft DHCP. We would like the DHCP server to automatically update DNS A and PTR records for computers when they get an IP. The question is: what is the best practice for creating the reverse lookup zones in DNS? Here is an example:
    10.0.1.0/23
    This would give out IPs from 10.0.1.1-10.0.2.254. So with this in mind, do we then create the following reverse DNS zones?:
    1.0.10.in-addr.arpa AND 2.0.10.in-addr.arpa
    OR do we only create:
    0.10.in-addr.arpa, so that both 10.0.1.x and 10.0.2.x addresses get stuffed into that one zone.
    Or is there an even better way that I haven't thought about? Thanks in advance.

    Hi,
    Based on your description, both methods are fine: creating two reverse DNS zones (1.0.10.in-addr.arpa and 2.0.10.in-addr.arpa), or creating one reverse DNS zone (0.10.in-addr.arpa).
    Best Regards,
    Tina
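
    Either layout works because the /23 simply spans two classful /24 reverse zones. A quick way to derive the zone names, as a sketch using Python's standard ipaddress module with the prefixes from this thread:

        import ipaddress

        # Each /24 maps to one classful in-addr.arpa zone; dropping the host
        # label from the network address's reverse pointer gives the zone name.
        for cidr in ("10.0.1.0/24", "10.0.2.0/24"):
            net = ipaddress.ip_network(cidr)
            rp = net.network_address.reverse_pointer  # e.g. 0.1.0.10.in-addr.arpa
            print(rp.split(".", 1)[1])                # -> 1.0.10.in-addr.arpa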

  • Best practice to implement different Xcelsius dashboard for different users

    I'm implementing an Xcelsius dashboard that needs to show each individual user different content (e.g., when a user logs in, the dashboard shows her name and job title, her performance, and her subordinates' performance). I'm just wondering what the best practice is for implementing a scenario like this? Thanks.

    Hi Thomas
    What you are looking for is "row-level security" within BusinessObjects, and the options you have are determined by what type of data you are reporting off of (relational data, OLAP data, BW data, etc.).
    For instance, if you are using relational data with a universe, you could set up a database table mapping the BusinessObjects username to an e-mail address or other unique identifier. From there, you could add security to your universe using @Variable('BOUSER'), e.g. a restriction along the lines of USER_SECURITY.BO_USER = @Variable('BOUSER'), where USER_SECURITY is a hypothetical mapping table.
    That way, any objects created off of the universe (whether a Crystal Report, Web Intelligence, BI Web Service, QaaWS, Live Office, etc.) will filter the data based on this security model, so any Xcelsius dashboard based on this underlying data will also be filtered.
    And that is just one of the options you have, depending on your data source.

  • Best practice to deploy Oracle WebCenter Suite for the enterprise

    I have a lead with an enterprise client, and we need to propose to this client a best practice for deploying high availability in a cluster environment containing the following components:
    - Oracle WebCenter Content: to be used as the WebCenter Portal (Spaces) repository for the extranet portal, as well as to build the internet website using WCM
    - Oracle WebCenter Portal: to build the intranet portal
    - Oracle Access Manager for single sign-on authentication
    - Oracle Web Tier for the HTTP server and web cache.
    I reviewed the enterprise deployment guide at http://docs.oracle.com/cd/E23943_01/core.1111/e12037/intro.htm, which contains rich information on the configuration.
    However, my question is: could you provide us a best practice for deploying the above components in a high-availability cluster environment (a Linux environment preferred), supported and tested for around 20k users? By the way, the client already has an Oracle Exadata 11g server, and it will be used for this deployment.

    AW,
    One way is creating EJBs. Please refer to the thread below for that:
    https://forums.sdn.sap.com/click.jspa?searchID=2936002&messageID=1082087
    You can also create a JavaBean and import it as a model.
    Check the following project, which will generate a JavaBean (MaxDB):
    https://www.sdn.sap.com/irj/sdn/softwaredownload?download=/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/business_packages/a1-8-4/simple_javabean_generator_project.zip
    This project will generate a JavaBean out of the tables in MaxDB.
    You can follow either of the above.
    Regards, Anilkumar

  • Best practice to update inline/publish folio?

    Hi there,
    I think the question says it all:
    I have an online application with an online folio, and I need to update the same folio with a new version.
    What is the best practice for organizing my work?
    Do I have to keep working in InDesign on the same file and update/republish it in Folio Producer (this option scares me: what if my draft goes online?)?
    Or do I have to create another folio and, after testing it, publish it with the same folio name and description? (I'm not sure it will update the same file, as it is not the same folio.)
    What is the best practice for organizing me/my work/my files?
    Thank you

  • SAP ASE Best Practice latest update

    Hello experts,
    just wondering if somebody has already reviewed the latest guide for best practices on SAP Sybase ASE thoroughly?
    I am talking about the document from note 1680803 - SYB: Migration to SAP Adaptive Server Enterprise - Best Practice (formerly note 1722359 - SYB: Running SAP applications on SAP ASE - Best Practice).
    The guide for normal runtime operation was merged with the guide for migration, but there are some contradictory statements.
    Apart from the fact that the case study is again designed for a server with huge memory and a lot of CPU cores (not a typical case; I wonder who sets up such huge servers so often...), I have found some inconsistencies.
    E.g., in the part "Reconfigure Engines and Parallel Processing", the text talks about limiting ASE engines to 16, but the command configures 32:
    alter thread pool syb_default_pool with thread count = 32, idle timeout = 2000
    That is unchanged from the previous setup for migration. Is this just a typo? I understand it should be 16 (i.e., alter thread pool syb_default_pool with thread count = 16, idle timeout = 2000), and then the number of network tasks for normal operation would be 4 (as mentioned at the beginning of the guide, you normally set up 1 per 3-4 engines). If it is not a typo, then the number of network tasks is wrong, as it should be 8.
    They also introduced the idle timeout, but only discuss ERP and a possibly lower value for Solution Manager. Does this mean that for BW you keep the default value (which, if I am not mistaken, is 100)? As per ADM540, you should even decrease this timeout when the SAP system shares a server with the database. I know that document is old, but it is again contradictory; I'm not saying it is wrong, just that it is not well explained.
    If anybody has checked the new version of the guide, please let me know. I think it is a bit messed up, and it is a bit difficult to distinguish what you should set up for the migration case and what for the normal operation case.
    Thanks!
    Regards,
    Matus

    Actually, quite a few customers run with that many engines and that much memory. In fact, it is difficult these days to even buy a server with less than 128 GB of memory and 16 cores/32 threads. Pretty much the only time we see less is when the install is in a VM. Interestingly, we had comments on the first version suggesting the numbers were not realistic because the typical systems being deployed were much larger. In addition, in my experience with customers on SAP systems, they were not aware of how much memory was necessary to really support medium to large systems, based on the configurations they were attempting.
    I am sorry that you feel some of the examples are contradictory. You are correct in pointing out that the text refers to 16 engines while the example configures 32, so yes, for that specific example, it should have been 16.
    Secondly, I have not seen ADM540, but I think there is a bit of a problem if it suggests that. In my opinion (and I have spent a lifetime tuning ASE), the idle timeout for ERP and BW should likely both be 1000+, and 2000 is not unreasonable. The comment in ADM540 likely applies when ASE and a NetWeaver CI share the same cores, e.g. you have a 4-core box, ASE runs on 2 cores (we will ignore threads for this discussion), and you have 30 NW worker processes, which obviously need to bump ASE off the CPU in order to run. That may be fine in a test/dev or even a Solution Manager system, but bumping ASE off the core is NOT a good thing for a production system. In fact, I would encourage using numactl or similar to fence off the cores used for ASE from NW worker processes if at all possible. We have seen overloaded NW installations with multiple CI instances, each with hundreds of worker processes, starving CPU away from ASE, so I would be more than firm in suggesting that 100 is a very bad starting point. Given the number of client-side joins that SAP uses to avoid [DBMS proprietary] temp tables, it is critical that ASE's (or any DBMS's) response time be minimized as much as possible; having ASE yield the core practically as soon as it finishes processing one task (and puts it to sleep pending an I/O) just makes things run slow. Think of a typical query that returns 10 rows, say wide enough that each row fills one packet. If the packet transmit time (and client ACK) takes more than 100 microseconds of CPU time (almost a given for network interactions, as clock ticks are in nanoseconds and networking is minimally milliseconds, i.e. 1000 microseconds), ASE would yield the CPU every time it sent a packet. When the client wanted the next packet, the OS would have to wake up the ASE process (an interrupted sleep), which is a nasty heavyweight operation. Hence it is best for ASE to hang out on the CPU until it is reasonably sure that nothing more is going to happen very soon, and on current CPUs having it run for 1-2 ms (1000-2000 microseconds) shouldn't be a hardship. If you created a separate thread pool for batch worker processes, I could see using a lower idle timeout such as 200 or 250, but 100 is just plain too low in my mind; it is like saying ASE expects an odd query every few seconds rather than a steady workload. Basically, at that level, there had better be a task in the ASE job queue or one already on the way over the network, or that engine is going to sleep. (A back-of-the-envelope illustration of this follows this post.)
    While I say that about ADM540 without having seen the class itself, one customer did show me the notebook from a class (ASE Sys Admin) they attended, and it was really targeted at non-SAP installations more than SAP installations from a practical-experience standpoint. Part of the issue with that class was that it borrowed liberally from the old SY classes as a starting point, and at the point the class was developed there was not a lot of experience with running SAP installations on ASE to really point out the fine-tuning areas such as idle timeout.
    However, the document was really aimed primarily at Business Suite rather than BW systems or a Solution Manager install (which are much smaller). There are a lot of other considerations for BW that the guide doesn't get into, although some of the sizing is a better start than the defaults provided by SAPINST.
    The former runtime guide was essentially just merged into the Post-Migration Steps section.
    I may do a quick refresh in the near future (due to some recent experiences), so if you have other specific examples of the text and SQL not aligning, please let me know.
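
    To make the yield arithmetic above concrete, here is a toy calculation (the per-packet gap is an assumed number, used only to illustrate the reasoning):

        # If the gap between outgoing packets exceeds the idle timeout, the
        # engine sleeps and must be woken for every packet of the result set.
        packet_gap_us = 150      # assumed client ACK round-trip per packet
        packets = 10             # a 10-row result, one row per packet

        for idle_timeout_us in (100, 250, 1000, 2000):
            yields = packets if idle_timeout_us < packet_gap_us else 0
            print(f"idle timeout {idle_timeout_us:>4} us -> "
                  f"{yields} sleep/wake cycles per result set")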
