Cons Unit name in UCMON different than in UCWB

Dear team,
I have a question concerning the master data of a cons unit. The following issue appears:
I know that the cons unit is time-dependent: the old cons unit name was only valid until 31.12.2009. After this date there are several entries with different valid-to dates, but the medium and long names never changed afterwards.
Example:
ConsUnit --- Valid from --- Valid to --- Long name --- Medium name
3456 ---  01.12.2009 --- 31.12.2009 --- Old LE name long --- Old LE name
3456 --- 01.01.2010 --- 31.03.2010 --- The new name in long --- New name
3456 --- 01.04.2010 --- 30.04.2010 --- The new name in long --- New name
3456 --- 01.05.2010 --- 31.05.2010 --- The new name in long --- New name
3456 --- 01.06.2010 --- 31.12.9999 --- The new name in long --- New name
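Just to illustrate what I expect the lookup to do (a minimal sketch in Python, not SAP code; the records below are only my example data from the table above): for a given key date, the system should pick the record whose validity interval contains that date, e.g. 31.05.2010 for period 005/2010.
from datetime import date

# Illustration only (not SAP code): time-dependent master data records for cons unit 3456.
records = [
    {"valid_from": date(2009, 12, 1), "valid_to": date(2009, 12, 31), "medium": "Old LE name"},
    {"valid_from": date(2010, 1, 1),  "valid_to": date(2010, 3, 31),  "medium": "New name"},
    {"valid_from": date(2010, 4, 1),  "valid_to": date(2010, 4, 30),  "medium": "New name"},
    {"valid_from": date(2010, 5, 1),  "valid_to": date(2010, 5, 31),  "medium": "New name"},
    {"valid_from": date(2010, 6, 1),  "valid_to": date(9999, 12, 31), "medium": "New name"},
]

def name_for_key_date(key_date):
    # Return the medium name valid on key_date - what I expect both UCWB and UCMON to show.
    for rec in records:
        if rec["valid_from"] <= key_date <= rec["valid_to"]:
            return rec["medium"]
    return None

# Period 005/2010 -> key date 31.05.2010 should give "New name", not "Old LE name".
print(name_for_key_date(date(2010, 5, 31)))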
If I go through UCWB --> Master data --> Consolidation units --> 3456, then I see the following entries:
Name
  Medium: New name
  Long: The new name in long
So I do see the latest valid entries in UCWB, and I assumed I would see the same in UCMON. But if I open UCMON, for instance for period 005/2010, I see that the old name is displayed instead of the new name. What is curious is that if I click on "filter unit" in UCMON I get the correct name in the filter menu - but only there! Unfortunately, I still get the old name in the monitor even though the filter menu shows the new name.
Do you have any suggestions on how I can solve this issue, or what might be the reason for the different names in UCMON and UCWB?
Thanks in advance for your help!
Best Regards,
Daniel

This sounds like a minor bug. I suggest you search SAP Notes for a solution and, if none is found, notify SAP via a customer message.

Similar Messages

  • How can I have a topic name different than the filename of a linked Word document?

    I've encountered multiple issues with linking to Word files ...here's another one: If a Word document is entitled Feature_ABC, then upon generating the HTM after linking it, the HTM's filename and topic name are both Feature_ABC. As underscores are ugly in topic names and features can be renamed, I would hope that I could rename the topic to be different than the filename.
    The Help documentation suggests this is possible by editing the Topic Name Pattern in the Word Conversion Settings dialog of Project Settings. However, regardless of the Pattern entered, every time I generate or update from the context menu of a linked Word document, the topic name remains the same as the filename.
    Because the topic name is shown in the Search panel and browser tabs, it will be a little ugly and may even be confusing. It is possible to edit the HTML directly, but then the HTML is overwritten each time the Word documents are updated and the great advantage of using linked documents is lost.
    Is there a way to solve this problem?
    Thanks in advance for any help you might be able to provide.

    Thank you for your quick reply - I just checked out Multifox.
    It seems it still forces me to open a new window, while preventing the need for a new profile.
    Why isn't it on addons.mozilla.org? Looks dangerous...

  • SharePoint 2010, Visual Studio 2010, Packaging a solution - The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.

    Hi,
    I have a solution that used to contain one SharePoint 2010 project. The project is named along the following lines:
    <Company>.<Product>.SharePoint - let's call it Project1 for future reference. It contains a number of features which have been named according
    to their purpose, some are reasonably long and the paths fairly deep. As far as I am concerned we are using sensible namespaces and these reflect our company policy of "doing things properly".
    I first encountered the following error message when packaging the aforementioned SharePoint project into a wsp:
    "The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters."
    I went through a great deal of pain in trying to rename the project, shorten feature names and namespaces etc... until I got it working. I then went about gradually
    renaming everything until eventually I had what I started with, and it all worked. So I was none the wiser...not ideal, but I needed to get on and had tight delivery timelines.
    Recently we wanted to add another SharePoint project so that we could move some of our core functionality out into a separate SharePoint solution - e.g. custom workflow
    error logging. So we created another project in Visual Studio called:
    <Company>.<Product>.SharePoint.<Subsystem> - let's call it Project2 for future reference
    And this is when the error has come back and bitten me! The scenario is now as follows:
    1. project1 packages and deploys successfully with long feature names and deep paths.
    2. project2 does not package and has no features in it at all. The project2 name is 13 characters longer than project1
    I am convinced this is a bug with Visual Studio and/or the Package MSBuild target. Why? Let me explain my findings so far:
    1. By doing the following I can get project2 to package
    In Visual Studio 2010 show all files of project2, delete the obj, bin, pkg, pkgobj folders.
    Clean the solution
    Shut down Visual Studio 2010
    Open Visual Studio 2010
    Rebuild the solution
    Package the project2
    et voila the package is generated!
    This demonstrates that the package error message is in fact inaccurate and that the package can be created; it just needs a little help, presumably because Visual Studio is no longer holding onto something.
    Clearly this is fine for a small time project, but try doing this in an environment where we use Continuous Integration, Unit Testing and automatic deployment of SharePoint
    solutions on a Build Server using automated builds.
    2. I have created another project3 which has a ludicrously long name, this packages fine and also has no features contained within it.
    3. I have looked at the length of the paths under the pkg folder for project1, and they are long in comparison to the ones generated for project2, that is, when project2 does successfully package using the method outlined in 1. above (see the sketch after this list). This is strange, since project1 packages and project2 does not.
    4. If I attempt to add project2 to my command line build using MSBuild then it fails to package and when I then open up Visual Studio and attempt to package project2
    from the Visual Studio UI then it fails with the path too long error message, until I go through the steps outlined in 1. above to get it to package.
    5. DebugView shows nothing useful during the build and packaging of the project.
    6. The error seems to occur in the CreateSharePointProjectService target called at line 365 of Microsoft.VisualStudio.SharePoint.targets.
    Currently I am at a loss to work out why this is happening. My next task is to delete project2 completely, recreate it and introduce it into my Visual Studio solution.
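    (To make the path comparison in point 3 above repeatable, here is a small sketch, assuming Python is available on the build machine; the pkg folder path is only an example and would need to be adjusted. It walks the folder and flags anything beyond the documented 260-character file and 248-character directory limits.)
    import os

    PKG_ROOT = r"C:\path\to\Project2\pkg"   # example path only - adjust to your solution layout

    for dirpath, dirnames, filenames in os.walk(PKG_ROOT):
        if len(dirpath) >= 248:
            print("Directory too long (%d chars): %s" % (len(dirpath), dirpath))
        for name in filenames:
            full = os.path.join(dirpath, name)
            if len(full) >= 260:
                print("File path too long (%d chars): %s" % (len(full), full))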
    Microsoft, can you confirm whether this is a known issue and whether others have encountered this issue? Is it resolved in a hotfix?
    Anybody else, can you confirm whether you have come up with a solution to this issue? By a solution I mean one that does not require me to rename my namespaces, project etc... and is actually workable in a meaningful Visual Studio solution.

    Hi
    Yes, I thought I had fixed this by moving my solution from the usual Documents location to
    c:\v2010\projectsOverflow\DetailedProjectTimeline
    This builds ok, but when I come to package I get the lovely error:
    Error 2 The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters. C:\VS2010\ProjectsOverflow\DetailedProjectTimeline\VisualDetailedProjectTimelineWebPart\Features\Feature1\Feature1.feature VisualDetailedProjectTimeline
    Now, the error seems to be related to 
    Can anyone suggest what might be causing this? Probably some path in an XML file somewhere. Here is my prime suspect:
    <metaData>
    <type name="VisualDetailedProjectTimelineWebPart.VisualProjectTimelineWebPart.VisualProjectTimeline, $SharePoint.Project.AssemblyFullName$" />
    <importErrorMessage>$Resources:core,ImportErrorMessage;</importErrorMessage>
    </metaData>
    <data>
    <properties>
    <property name="Title" type="string">VisualProjectTimelineWebPart</property>
    <property name="Description" type="string">My Visual WebPart</property>
    </properties>
    </data>
    </webPart>
    </webParts>
    .... Unless I can solve this I will have to remove the project and recreate it, but with simple paths. Though I will be none the wiser if I come across this again.
    Daniel

  • How to list long file names and paths longer than 260 characters in vbscript?

    Hi there,
                   I have looked in different posts and forums, but couldn't find an answer to my question.
    So, here it is... imagine the small piece of code attached below. I need to be able to iterate and rename recursively (I didn't include the recursive part of the code in order to keep things simple) through files and folders in a user-selected path. I thought that by using the VBScript FileSystemObject I would be able to do so.
    Unfortunately, it seems the Files method does not detect files whose path is longer than the OS 260 (or 256?) character limit. How can I do it then? Please, don't tell me to use something different than VBScript :-)
    Thanks in advance!
    Set FSO = CreateObject("Scripting.FileSystemObject")
    Set objFolder = FSO.GetFolder(Path)
    Set colFiles = objFolder.Files
    For Each objFile In colFiles
    Wscript.Echo "File in root folder:" & objfile.Name
    Next

    Yes, that describes the problem, but not the solution - short of changing the names of the folders. There is another solution.  That is, to map a logical drive to the folder so that the path to subfolders and files is shortened.  Here is a procedure that offers a scripted approach that I use to map folders with long names to a drive letter and then opens a console window at that location ...
    Dim sDrive, sSharePath, sShareName, aTemp, cLtr, errnum
    Const sCmd = "%comspec% /k color f0 & title "
    Const bForce = True
    if wsh.arguments.count > 0 then
      sSharePath = wsh.arguments(0)
      aTemp = Split(sSharePath, "\")
      sShareName = aTemp(UBound(aTemp))
      cLtr = "K"
      with CreateObject("Scripting.FileSystemObject")
        Do While .DriveExists(cLtr) and UCase(cLtr) <= "Z"
          cLtr = Chr(ASC(cLtr) + 1)
        Loop
      end with
      sDrive = cLtr & ":"
    Else
      sDrive = "K:"
      sSharePath = "C:\Documents and Settings\tlavedas\My Documents\Script\Testing"
      sShareName = "Testing"
    end if
    with CreateObject("Wscript.Network")
      errnum = MakeShare(sSharePath, sShareName)
      if errnum = 0 or errnum = 22 then
        .MapNetworkDrive sDrive, "\\" & .ComputerName & "\" & sShareName
        CreateObject("Wscript.Shell").Run sCmd & sDrive & sShareName & "& " & sDrive, 1, True
        .RemoveNetworkDrive sDrive, bForce
        if not errnum = 22 then RemoveShare(sShareName)
      else
        wsh.echo "Failed"
      end if
    end with
    function MakeShare(sSharePath, sShareName)
    Const FILE_SHARE = 0
    Const MAX_CONNECT = 1
      with GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\cimv2")
        with .Get("Win32_Share")
          MakeShare = .Create(sSharePath, sShareName, FILE_SHARE, MAX_CONNECT, sShareName)
        end with
      end with
    end function
    sub RemoveShare(sShareName)
    Dim cShares, oShare
      with GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\cimv2")
        Set cShares = .ExecQuery _
          ("Select * from Win32_Share Where Name = '" & sShareName & "'")
        For Each oShare in cShares
          oShare.Delete
        Next
      end with
    end sub
    The SHARE that must be created for this to work is marked as allowing only one connection (the calling procedure). The mapping and share are automatically removed when the console window is closed. In the case cited in this posting, this approach could be applied using the subroutines provided, so that the user's script creates the share/mapping, performs its file operations and then removes the mapping/share. Ultimately, it might be more useful to change the folder structure, as suggested in the KB article, since working with the created folders and files will still be problematic - unless the drive mapping is in place.
    Tom Lavedas
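    A different technique, outside the VBScript constraint in the original question: on Windows, prefixing an absolute path with \\?\ lifts the 260-character limit for many file APIs, so a short sketch (assuming Python is acceptable at all; the folder name is only an example) can list the long paths directly:
    import os

    root = r"C:\some\deeply\nested\folder"         # example path - adjust as needed
    long_root = "\\\\?\\" + os.path.abspath(root)  # the \\?\ prefix enables extended-length paths

    for dirpath, dirnames, filenames in os.walk(long_root):
        for name in filenames:
            print(os.path.join(dirpath, name))     # full paths, including those beyond 260 chars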

  • Problem in starting rmid in a port different than the default one

    hi all,
    I am experiencing exceptions when running rmid on a port different from the default one.
    If I run
    rmiregistry 2000
    rmid -J-Djava.security.policy=policy
    java -Djava.security.policy=policy -Djava.rmi.server.codebase=<mypath> -classpath <myclasspath> com.myclass.ActivatableRMI <host:port>
    everything works fine
    but if I try to specify a port on which rmid should be started, then I get exceptions...
    Can anyone tell me how to solve my problem? I am running NT 4.0.
    Thanks and regards,
         Marco

    If you start rmid on a port other than the default one (1098) with the -port option, it is necessary to set the system property "java.rmi.activation.port" to this port.
    Rmid internally starts a local registry service (LocateRegistry.createRegistry(port)) and binds the RMI activation system server object with this registry service, like Naming.rebind("//:" + port + "/java.rmi.activation.ActivationSystem", activationSystem);. The activation system provides a means for registering groups and "activatable" objects to be activated within those groups. When an activatable server object is exported, it is registered with this activation system, so it has to get the stub to the activation system that was started as part of rmid. The ActivationGroup class provides a static function getSystem() to get the reference to the activation system stub. It makes use of the system property "java.rmi.activation.port" value to contact the registry service where the activation system registered itself under the name "//:" + value of java.rmi.activation.port + "/java.rmi.activation.ActivationSystem". If this property is not set, it uses the default port 1098.
    -- Srinath Mandalapu

    This is an awesome and lovely forum answer (and I saw srinath_babu reply to another similarly in my search), but I would like to ask for just a bit of clarification.
    My question is simply making sure I understand the reply above in my own terms...
    Suppose that right now my program involving activatables is already working with default port 1098, but that I wanted or needed to change to another port for rmid. Then is this statement true?
    - IF the command line with which I start rmid specifies port xyxyx,
    - AND IF I added -Djava.rmi.activation.port=xyxyx to the command line with which I run the setup for my Activatable (i.e. the program that calls the Activatable.register method),
    - THEN my code will still work as is (i.e. I wouldn't have to change my existing code to specify the port anywhere in it)?
    (If I am not understanding this correctly, please let me know!)
    Thanks very much,
    /Mel

  • [svn:fx-3.x] 12371: The main AccImpl was updated to handle accessible naming conventions differently than before .

    Revision: 12371
    Author:   [email protected]
    Date:     2009-12-02 08:51:12 -0800 (Wed, 02 Dec 2009)
    Log Message:
    The main AccImpl was updated to handle accessible naming conventions differently than before.  This change was made to make the 3.x branch consistent with the trunk changes.  The updated method allows the accessibilityName property to overwrite the logic that was previously used to build the accessible name for components that were inside of forms.  It also provides a way for developers to specify that no form heading or form item label should be included in the accessibilityName for the form field by using a space.
    QE notes: none
    Doc notes: none
    Bugs: n/a
    Reviewer: Gordon
    Tests run: checkintests
    Is noteworthy for integration: no
    Modified Paths:
        flex/sdk/branches/3.x/frameworks/projects/framework/src/mx/accessibility/AccImpl.as


  • Set Desktop Photo different than Iphoto version

    When I choose a photo to "set desktop," it appears different from the one in iPhoto. It is overly saturated.
    I'm using a 22" external monitor when I see this, resolution set at 1680x1050, 60Hz, millions.
    The picture looks good in emails and Preview. I would like to see my "real" photo on the desktop!
    Any ideas?

    Thank you. I tried what you suggested and it didn't make any difference. However, when I got my new iMac, I didn't migrate the contents of my MacBook across immediately and started downloading games, etc. onto the iMac. I later migrated the contents across and my iMac made a new account - my name with the number 1 after it. So I now have two accounts on the iMac. I have just logged back into the original account to find a game (the two user accounts are driving me crazy!) and the original picture on the desktop looks great! So it is only fuzzy on the second (main) account. AAAaaaah!

  • Re: Cons Unit Hierachy

    Dear Team,
    I am facing a problem while creating a Cons Unit Hierarchy. It throws the message below:
    Hierarchy R1 has more than one root node
        Message no. UGMD304
    Diagnosis
        You defined hierarchy R1 with multiple root nodes. However, the
        hierarchy-defining characteristic only allows one root node per
        hierarchy.
    System Response
        Processing terminated
    Procedure
        Correct the hierarchy definition.
    Can you please let me know how to overcome this?
    Madhu

    I'm getting a feeling of deja vu here
    Cons Unit Hierarchy

  • Cons Units with diff LC and GC & Currency Translation.

    Hello,
    My question is regarding currency translation.
    We have some cons units which have a different LC and GC (e.g. EUR and USD). These cons units do not require currency translation in BCS, as the group currency value coming from R/3 is already translated. If I do not include these cons units in the task, the system throws an error in the monitor saying "Currency Trans Task cannot be executed".
    Is there a way I can configure the method or task to exclude some cons units from currency translation even though the local currency is different from the group currency?
    Thanks

    Hi Anoop
    Currency translation occurs on LC. Check the local currency in the cons unit master data, and check in the reported data which values are coming in, LC or GC (e.g. EUR and USD).
    Madhu

  • Thumbnails different than images

    All of a sudden a few of my thumbnails are different from my actual images. I rebuilt my library; that didn't work. By looking at the file name, I can tell the image is correct, but the thumbnail is incorrect. Any other ideas?

    Yea, this *****. I have this disease, as well.
    There's another thread on it somewhere. The best advice was to export the project and re-import. That assumes you know which ones are bad! I have nearly 30,000 images -- how the heck am I supposed to find all the bad thumbs, export, and re-import?

  • ESSO PG on something different than MS AD

    Is it possible to make PG (from the 11.1.1.5.1 ESSO distro) functional on something different than MS AD, like ADAM? I deployed ADAM, LM and PG (both client and server). LM works successfully and (after the first sync) created ADAM nodes (of class vGOUserData) in the OU=People,dc=domain,dc=local container. Keep in mind that (in the MS AD case) for PG to add a user logon I should first manually create the user and only after that use the "Add New SSO User" link. So, I manually created two semi-empty nodes of class vGOUserData too ("semi-" due to the lack of data to put in very special attributes, including encrypted ones). After selecting "Manage SSO Users" - "Find User(s)", PG successfully shows me all created users (both from LM and manually created). But when I try to add new logons (no difference, for "LM" or "manual" user entries), PG shows me an error message saying "System.DirectoryServices.ActiveDirectory.ActiveDirectoryObjectNotFoundException: The local computer is not joined to a domain or the domain cannot be contacted". What's wrong with it (ADAM & PG are deployed on a standalone Win2K3 Server)?

    Update: it seems that ESSO PG simply doesn't work on LDAP servers other than MS AD. It doesn't work at all. I tested it on ADAM and DSEE. Problems with ADAM are difficult to research, because traffic is protected via GSS-API, so I simply can't sniff the network traffic as it is encrypted. But as for DSEE, the problems are described below:
    The documentation (Oracle® Enterprise Single Sign-on Logon Manager Best Practices: Deploying ESSO-LM with an LDAP Directory Release 11.1.1.5.0 E21003-01) says about the locator object: "You must create the vGOLocator object at least at the same level as the container that holds user accounts. Ideally, vGOLocator should exist inside the directory’s user accounts container". So I created the LDAP structure as shown below:
    (+)-dc=domain,dc=local (Root context)
    |---(+)-ou=Users
    |---------(+)-ou=vgoLocator
    |---(+)-ou=ESSO
    |---------(+)-ou=People
    |---------(+)-ou=SSOConfig
    As you can see, the structure looks quite correct: vgoLocator "ideally exists inside the directory’s user accounts container", right?
    Then I set up application templates, filled in the LM settings (via the LM Administrative Console), wrote them into the registry hive and published all those settings into LDAP (DSEE). It works correctly and the LDAP structure now looks like this:
    (+)-dc=domain,dc=local (Root context)
    |---(+)-ou=Users
    |---------(+)-uid=ESSO.Administrator (created manually)
    |---------(+)-ou=vgoLocator
    |---(+)-ou=ESSO
    |---------(+)-ou=People
    |--------------(+)-cn=ESSO.Administrator (created by LM while performing LDAP sync)
    |---------(+)-ou=SSOConfig
    |--------------(+)-ou=2c58e3b2-320a-4ed2-a708-90065620e5d1 (created by LM admin console while publishing settings into LDAP)
    OK, now LM works perfectly, but we want to create a centralized system, right? So I had to deploy PG. It was deployed and started. The first task was to correctly fill in the storage settings. I used the values shown below:
    Storage Type: Oracle Directory Server Enterprise Edition
    Server: esso.hq.domain.local:1389
    Root DN: dc=domain,dc=local
    User Path: ou=Users,dc=domain,dc=local
    Connect as User: cn=Directory Manager
    Password: **********
    Use secure connection (SSL): no
    Use configuration objects instead of application list: yes
    Role/Group support: no
    Configuration and role/group objects root DN: ou=SSOConfig,ou=ESSO,dc=domain,dc=local
    Admin Group DN:
    User PrePend: UID
    The config looks OK, doesn't it?
    But when I tried to save the settings, I got the error "Cannot find any configuration objects". Metalink shows me two articles on this problem: 1305060.1 (which says that I must not use spaces in the Configuration and role/group objects root DN). Can you see any spaces in the "ou=SSOConfig,ou=ESSO,dc=domain,dc=local" string? The next article was 1298762.1, which says that "This is because the container you have listed in the Storage Setting titled 'Configuration and role/group objects root DN:' does not contain any ESSO objects such as templates or password policies". But that container really does contain ESSO objects (three, to be more precise), so that wasn't the solution either.
    OK, let's look in depth... I installed Wireshark and captured the network traffic from PG to LDAP. There were four LDAP-related calls:
    1. PG searches for the object ou=vgoLocator,dc=domain,dc=local. LDAP says "no such object" (of course, because we read the documentation and created the locator in the "ideal" container - inside the Users container). WHY? WHY does it try to find it there???
    2. PG searches LDAP for dc=domain,dc=local. LDAP says "OK"
    3. PG searches LDAP for ou=Users,dc=domain,dc=local. LDAP says "OK"
    4. PG searches LDAP for ou=SSOConfig,ou=ESSO,dc=domain,dc=local. LDAP says "OK"
    That's all. No more LDAP requests, PG simply throws an error as shown above.
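    (Just to reproduce that first lookup outside of PG, here is a quick sketch; I am assuming Python with the ldap3 package is available - this is nothing ESSO-specific. It simply asks the directory whether ou=vgoLocator exists directly under the root DN, which is exactly the DN PG asked for.)
    from ldap3 import Server, Connection, BASE

    # Reproduce PG's first call: does ou=vgoLocator exist directly under the root DN?
    server = Server("esso.hq.domain.local", port=1389)
    conn = Connection(server, user="cn=Directory Manager", password="**********", auto_bind=True)

    found = conn.search(search_base="ou=vgoLocator,dc=domain,dc=local",
                        search_filter="(objectClass=*)",
                        search_scope=BASE)
    print("found" if found else "no such object: " + str(conn.result))
    conn.unbind()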
    "OK" - said me to myself, let's create locator at the position, where PG expects it to find, i.e. at ou=vgoLocator,dc=domain,dc=local. Wow! PG now accepts settings entered earlier and I almost become happy. LDAP now looks like that:
    (+)-dc=domain,dc=local (Root context)
    |---(+)-ou=Users
    |---------(+)-uid=ESSO.Administrator
    |---(+)-ou=ESSO
    |---------(+)-ou=People
    |--------------(+)-cn=ESSO.Administrator
    |---------(+)-ou=SSOConfig
    |--------------(+)-ou=2c58e3b2-320a-4ed2-a708-90065620e5d1
    |---(+)-ou=vgoLocator
    So, let's play with users, yeah? Let's click on "Manage SSO Users" and have some fun? But PG gives me an "Invalid storage settings" error... WTF??? OK, let's start the network sniffer again and look at what happened. Oh-oh... Wireshark shows me that PG tries to log on to LDAP using the credentials "UID=ESSOPG\SVC_SSO,OU=USERS,DC=DOMAIN,DC=LOCAL" and LDAP declines the authentication requests. BTW, ESSOPG\SVC_SSO is a local user, used (according to the documentation, of course (looks like that's now my favorite phrase)) by IIS as the anonymous user. WHY??? Why does PG use that user instead of "cn=Directory Manager", which I entered into the storage settings five minutes earlier???
    OK, if PG wants that user, let's create him in LDAP, yes? So I created such a user in LDAP and now it looks like this:
    (+)-dc=domain,dc=local (Root context)
    |---(+)-ou=Users
    |---------(+)-uid=ESSO.Administrator
    |---------(+)-uid=ESSOPG\SVC_SSO
    |---(+)-ou=ESSO
    |---------(+)-ou=People
    |--------------(+)-cn=ESSO.Administrator
    |---------(+)-ou=SSOConfig
    |--------------(+)-ou=2c58e3b2-320a-4ed2-a708-90065620e5d1
    |---(+)-ou=vgoLocator
    A test with an LDAP browser shows that I can access LDAP using that newly created user. Just one thing: according to LDAP rules, I have to use double slashes in the logon name: "uid=ESSOPG\ \SVC_SSO,OU=USERS,DC=DOMAIN,DC=LOCAL". But PG doesn't know anything about such a rule and simply passes "uid=ESSOPG\SVC_SSO,OU=USERS,DC=DOMAIN,DC=LOCAL" (with one slash) to LDAP. LDAP declines the auth request. What to do now? Eureka! Let's enter two slashes directly into the PG settings! So, let's go to the "Web Service Account" page and add one more slash to the "User Name" field. Now it contains the value "essopg\ \svc_SSO". The cursor moves to the "Save" button, clicks on it....
    PG shows me an error message: "Please enter the User Name in format "domain\userid".".....
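    (Just to illustrate the escaping rule I mean - a tiny sketch, nothing ESSO-specific: per RFC 4514 a backslash inside an RDN value has to be escaped with another backslash before it goes into a DN, which is exactly the "double slash" PG refuses to accept.)
    user = r"ESSOPG\SVC_SSO"                 # the IIS anonymous account name
    escaped = user.replace("\\", "\\\\")     # RFC 4514: the backslash itself must be escaped
    dn = "uid=" + escaped + ",OU=USERS,DC=DOMAIN,DC=LOCAL"
    print(dn)                                # uid=ESSOPG\\SVC_SSO,OU=USERS,DC=DOMAIN,DC=LOCAL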
    I surrender. I have more than fifteen years' experience in software development and information security, but I can't set up such a "simple" solution...
    P.S. I asked a local Oracle representative: "have you ever tried to deploy PG on a non-MSAD LDAP?". He answered "NO, because it's a very common way - to use the existing MS AD to store outside data". A very strange opinion:
    1. MS AD are owned and supported by IT department
    2. IT security projects, like E-SSO are pushed into company by Security department
    3. 1 + 2 means that "The less impact the new solution will make on existing IT infrastructure (BTW, MS AD schema extension and storing outside data in it is a VERY big impact) - the easier the implementation process will be too". It's sad that Oracle doesn't understand such a simple thing
    P.P.S. He also told me that "if something doesn't work, I should create an SR". But there is a problem: in order for the customer to buy ESSO and support (required to create SRs) I have to persuade him to use such a solution, right? A solution that currently doesn't work? So, only one way remains: I have to buy SRs and create the request on my own behalf. But what should such an SR contain? The phrase "Product doesn't work at all"? Oracle tries to get paid several times for a single piece of work (once from me, because I have to buy an SR in order to make Oracle's product work, and then from the customer, who will buy the product itself)? Right?
    P.P.P.S. The forum engine shows two slashes as a line feed, so I had to insert a space between them in order to show them correctly.

  • Are /usr/ucb binaries different than /usr/bin binaries?

    Hi,
    I am running SunOS 5.8 and I have following questions about solaris distribution:
    1. Are /usr/ucb binaries different than regular binaries like /usr/bin etc...
    Because at file level they appear to be the same:
    e.g. ps
    $ ls -ltr /usr/ucb/ps /usr/bin/ps
    -r-xr-xr-x 38 root bin 5.2K Jan 5 2000 /usr/ucb/ps
    -r-xr-xr-x 38 root bin 5.2K Jan 5 2000 /usr/bin/ps
    2. Are /usr/ucb binaries part of the standard Solaris distribution?
    Thanks

    Hi,
    I am running SunOS 5.8 on SPARC I assume...
    and I have following
    questions about solaris distribution:
    1. Are /usr/ucb binaries different than regular binaries like /usr/bin etc...
    Yup (at least most of the time). /usr/ucb/ gives some behavior of older BSD systems for compatibility.
    Because at file level they appear to be the same:
    e.g. ps
    $ ls -ltr /usr/ucb/ps /usr/bin/ps
    -r-xr-xr-x 38 root bin 5.2K Jan 5 2000 /usr/ucb/ps
    -r-xr-xr-x 38 root bin 5.2K Jan 5 2000 /usr/bin/ps
    This is a side effect of a feature called 'isaexec'. You'll notice the link count is 38, so this file is linked as 38 different names.
    The 'ps' command looks through /proc, so the binary has to match the currently running kernel architecture (whether 32-bit or 64-bit). 'isaexec' allows you to have multiple binaries that are called properly based on architecture.
    On my machine:
    # ls -litr /usr/ucb/ps /usr/bin/ps /usr/lib/isaexec
    18227 -r-xr-xr-x 37 root bin 5256 Jan 5 2000 /usr/ucb/ps
    18227 -r-xr-xr-x 37 root bin 5256 Jan 5 2000 /usr/lib/isaexec
    18227 -r-xr-xr-x 37 root bin 5256 Jan 5 2000 /usr/bin/ps
    So all 'isaexec' does is look for a binary with the same name as it was called in the architecture specific directories. So for /usr/ucb/ps, it's these:
    # ls -l /usr/ucb/sparcv7/ps /usr/ucb/sparcv9/ps
    -r-xr-xr-x 1 root sys 23504 Jul 15 2005 /usr/ucb/sparcv7/ps
    -r-xr-xr-x 1 root sys 32264 Jul 15 2005 /usr/ucb/sparcv9/ps
    And for /usr/bin/ps It's these:
    # ls -l /usr/bin/sparcv7/ps /usr/bin/sparcv9/ps
    -r-sr-xr-x 1 root sys 29196 Jul 15 2005 /usr/bin/sparcv7/ps
    -r-sr-xr-x 1 root sys 38536 Jul 15 2005 /usr/bin/sparcv9/ps
    So different binaries (and different architectures):
    # file /usr/bin/sparcv7/ps /usr/bin/sparcv9/ps /usr/ucb/sparcv7/ps /usr/ucb/sparcv9/ps
    /usr/bin/sparcv7/ps: ELF 32-bit MSB executable SPARC Version 1, dynamically linked, stripped
    /usr/bin/sparcv9/ps: ELF 64-bit MSB executable SPARCV9 Version 1, dynamically linked, stripped
    /usr/ucb/sparcv7/ps: ELF 32-bit MSB executable SPARC Version 1, dynamically linked, stripped
    /usr/ucb/sparcv9/ps: ELF 64-bit MSB executable SPARCV9 Version 1, dynamically linked, stripped
    And you can use 'truss' to see how this works.
    Note that this feature (isaexec) has nothing specifically to do with /usr/ucb. It's used by any system binary that needs separate versions for 32-bit vs 64-bit to run properly. (truss, p*, uptime, ...)
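    If it helps to picture it, here is a rough sketch of the dispatch idea in Python (purely illustrative - the real isaexec is a native program and asks the kernel for the supported ISA list; the directory names below are just examples):
    import os, sys

    ISA_LIST = ["sparcv9", "sparcv7"]        # illustrative only; best ISA first

    name = os.path.basename(sys.argv[0])                     # e.g. "ps"
    base = os.path.dirname(os.path.abspath(sys.argv[0]))     # e.g. "/usr/ucb"

    for isa in ISA_LIST:
        candidate = os.path.join(base, isa, name)            # e.g. /usr/ucb/sparcv9/ps
        if os.access(candidate, os.X_OK):
            os.execv(candidate, sys.argv)    # replace this process with the matching binary
    sys.exit(name + ": no architecture-specific binary found")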
    2. Are /usr/ucb binaries part of the standard Solaris distribution?
    There's only one "Solaris" distribution, so yes. I believe they get installed at the 'user' installation level and above, but you'd have to check on that.
    Darren

  • &AM_DEFINITION_NAME different than &DEFINITION_NAME

    Hi, I'm trying to substitute an Application Module that's not a root AM and I'm getting the error below:
    The Application Module found by the usage name SRAM has a definition name &AM_DEFINITION_NAME different than &DEFINITION_NAME specified for the region. I'm doing this in JDeveloper.
    I saw two similar posts in this forum but no solution has been provided.
    Has anyone run into this error before ?

    Hi! Yes, I did set the property -Djbo.project=project name. I don't know if this should be a problem, but I have two projects in JDeveloper. A page in the first project is calling a second page which is in the other project. I had set the substitutions in both projects... Could it be a conflict? I will try removing the substitution from the other....
    Otherwise, you guys have any other ideas ?

  • Mvmt Type 122 value different than Mvmt Type 101 value

    Is it standard in SAP for the $ unit value of a 122 tx to be different from the $ unit value of a 101 tx?  Can the parameters in SAP be changed for the 122 tx to debit GR/IR at the PO price so that the 101 tx and 122 tx clear?  It seems the 122 tx is valued at the invoice price and not the PO price?

    Dear,
    If 122 movement value is different from 101 -
    1. Check on what basis the system is picking the value for the 122 movement. From the moving average price?
    2. Let us know whether you are posting the 122 with reference to the PO or to the original 101 document number.
    Meanwhile, try MB01 with reference to the PO number for the 122 movement.
    Regards,
    Syed Hussain.

  • Cons unit change at year end

    Hi : )
    when performing a cons unit change at the end of the year, whereby the old cons group becomes unnecessary as no cons unit is part of it in period 1/2011, do I have to delete the cons group as well from the new hierarchy starting on 1.01.2011, or do I just delete the cons units out of this old cons group?
    Also, as we will have a new cons group and cons unit hierarchy starting from 1.01.2011, will we not have a problem displaying period 12/2010 data come January, as the hierarchy then used in the queries will only have the new structure, i.e. the moved cons units sitting under the new cons group??? Our queries are currently set up to select the hierarchy using the default query date...
    Your views would be very much appreciated.
    Kind regards,
    Tanja

    Thanks Dan : )
    The configuration document that I received mentions deleting the cons unit in the first period following the consolidation group change. So because I am divesting the cons unit in period 12/2010, I interpreted that I can delete the cons unit from the old cons group in period 1/2011, including the old cons group, as there are no more cons units within it.
    But this will cause us problems in January for period 12/2010 reporting, as the new hierarchy starting on 1/1/2011 that is picked by the queries due to the default query date will not have the old cons group.
    So I think what you said, to leave the cons unit and cons group until the year-over-year reporting is finished, will solve our problem.
    But here is the example anyway. Maybe there is a different way:
    Cons unit A is the only cons unit under Cons Group A and is divested as at 12/2010
    Cons unit A is acquired into a new never before existed Cons Group C as at 12/2010
    So the currently valid hierarchy is valid from 01.01.2010 - 31.12.2010, with Cons unit A now residing in 2 places.
    In period 1/2011 I deleted Cons unit A under Cons Group A and also deleted Cons Group A.
    This created a new hierarchy that is valid from 01.01.2011 - 31.12.9999.
    So am I correct in assuming that if our queries use the default query date, we will have a problem trying to report on period 12/2010 data in January?
    Kind regards,
    Tanja
