Issue in creating Duet Demand Planning Sheet on the basis of a Data View

Hi all,
We are using the following landscape:
 SAP APO = SCM 700 0002 - SAPKY7002
 Duet 1.5 - 150.700 SP3
 Duet Demand Planning - 1000.150.700.3.0 (Server Component), 1000.150.700.3.0 (Client Component)
There are two ways to create a Duet Demand Planning Sheet:
1. On the basis of an existing Demand Planning Scenario
2. On the basis of an existing Data View (with SAP macros)
We are able to create a Duet Planning Sheet with the first option, but we get an error with the second option, i.e. on the basis of a Data View (with macros).
The Planning Area, Planning Book/Data View and Selection IDs exist and are active. All the web services delivered by SAP are released.
Duet is able to retrieve the list of Data Views available for the Planning Area, but it fails to retrieve the Data View details. The error is as follows:
"No view details were found. Please choose another data view."
Web Service Call (PlanningViewDetails):
java.rmi.RemoteException:Service call exception; nested exception is:
java.rmi.UnmarshalException: Attribute schemeAgencyID is required in element ID !
Data: TYPEID DUET_JAVA_ADDON
Data: SEVERITYCODE 2
Exception of type 'System.Exception' was thrown.
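For context, the unmarshaller is rejecting an `ID` element in the web service response that lacks the `schemeAgencyID` attribute. A minimal sketch of the failing check (the element name and attribute come from the error text; the payload fragments and the attribute value are illustrative assumptions, not the actual Duet response):

```python
# Sketch: the Duet-side unmarshaller requires schemeAgencyID on the ID element.
# Both XML fragments below are hypothetical illustrations of the error condition.
import xml.etree.ElementTree as ET

rejected = ET.fromstring('<ID>PLAN_VIEW_1</ID>')                      # attribute missing
accepted = ET.fromstring('<ID schemeAgencyID="310">PLAN_VIEW_1</ID>')  # attribute present

def has_required_attribute(elem):
    """Mimic the check that raised the UnmarshalException above."""
    return 'schemeAgencyID' in elem.attrib

print(has_required_attribute(rejected))  # False -> UnmarshalException
print(has_required_attribute(accepted))  # True
```

So the fix is on the server side (the response payload), not in the planning data itself, which is why an SAP Note or support package is the likely remedy.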
Is there any SAP Note for this? Is there any specific configuration required on the SAP APO side?
Are there any web services to be applied?
Please help with this issue.
Thanks & Regards
Rahul

Hi
There are a couple of OSS notes you might have already seen:
1371328 - Internal Error using Planning Book with Aux. KF
1431661 - Duet Demand Planning 1.5 SP4
Regards
Datta

Similar Messages

  • What are the possible liveCache issues that can turn up for demand planning?

    HI all,
    Can anyone tell me the issues that may arise in liveCache, and how to ensure the performance and maintenance of the liveCache that a planning area is utilizing?
    If the utilization is high, how can it be brought down?
    Can anyone please guide me on this?
    Thanks
    Pooja

    Hi Pooja,
    1) Accumulation of logs created during demand planning jobs will have an impact on performance
    and affect liveCache, and hence they should be cleared periodically.
    2) A liveCache consistency check should be performed at planned intervals,
    which will reduce liveCache issues.
    3) Through transaction OM13, you can analyse liveCache and the LCA objects
    and act accordingly.
    4) You can also carry out /SAPAPO/PSTRUCONS - Consistency Check for Planning Object
    Structure and /SAPAPO/TSCONS - Consistency Check for Time Series Network related to
    demand planning, to avoid liveCache issues later.
    Regards
    R. Senthil Mareeswaran.

  • Handling planned orders created from demand planning and sales orders in MTO

    Hi all
    In MTO/Repetitive we generate dependent requirements from demand planning (PIR); after running MRP, planned orders are created. On the other hand, in the short term we have sales orders which also generate planned orders after running MRP (in MTO). How should we handle the planned orders generated from demand planning and the ones generated from sales orders (both after the MRP run)?
    Regards
    Babak Bolourchi

    Dear,
    You can combine planning strategies, with 56 as the main strategy and 65 as the second; in demand management (MD61) you have the option to select the requirement type.
    Then, while creating the sales order, use the MTO strategy and take the MRP run through MD50 and then MD02.
    If you want to assign stock to the sales order, you can do it through MB1B with movement type 412 E.
    As per my knowledge, strategy 56 generally only takes sales order stock into consideration.
    Consider setting the Individual/Collective indicator to 2.
    Mark those characteristics that should have a usage probability in demand management as relevant for planning.
    Hope this is clear to you.
    Regards,
    R.Brahmankar

  • Can I access the Oracle Demand Planning cubes using AWM?

    Hi All,
    Is it possible to access the Oracle Demand Planning data, which is stored in Express Server, using the Analytic Workspace Manager?
    Any information regarding this is really appreciated.

    Abdul Hafeez,
    Is ODP using Express Server or Oracle OLAP Analytical workspaces?
    If Express Server is being used by ODP, then you cannot create sql views for OBIEE.
    If Oracle database is being used for ODP data, then it is storing the data in Analytical workspaces. In that case, you can manually create OLAP views using OLAP_TABLE function. To do that, you will first have to know all the required structures inside the Analytical workspaces. Without that knowledge, you will not know what olap structures to "expose" in the OLAP views.
    You can read about OLAP_TABLE function from Oracle 10g documentation http://download.oracle.com/docs/cd/B19306_01/olap.102/b14350/olap_table.htm
    If you are using 9i database for ODP, then read about OLAP_TABLE function from 9i documentation http://download.oracle.com/docs/cd/B10501_01/olap.920/a95295/olap_tab.htm
    - Nasar

  • Issue with LCM while migrating planning application in the cluster Env.

    Hi,
    We are having issues with LCM while migrating the planning application in the cluster environment. In LCM we get the error below, even though the application is up and running. Please let me know if anyone else has faced the same issue before in a cluster environment. We have done a migration using LCM on a single server and it works fine; it is just the cluster environment that is an issue.
    Error on Shared Service screen:
    Post execution failed for - WebPlugin.importArtifacts.doImport. Unable to connect to "ApplicationName", ensure that the application is up and running.
    Error on network:
    “java.net.SocketTimeoutException: Read timed out”
    ERROR - Zip error. The exception is -
    java.net.SocketException: Connection reset by peer: socket write error
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)

    Hi,
    First of all, if your source and target environments are the same, then you will have all the users and groups in Shared Services; in that case you just have to provision the users for this new application so that your security gets migrated when you migrate from the source application. If the environments are different, then you have to migrate the users and groups first and provision them before importing the security using LCM.
    Coming back to the process of importing the artifacts into the target application using LCM: you have to place the migrated file in the @admin native directory in Oracle/Middleware/epmsystem1.
    Open the Shared Services console -> File System and you will see your file name under that.
    Select the file and you will see all your exported artifacts. Select all of them if you want to do a complete migration to the target.
    Follow the steps, select the target application to which you want to migrate, and execute the migration.
    Open the application and you will see all your artifacts migrated to the target.
    If you face any error during migration, it will be shown in the migration report.
    Thanks,
    Sourabh

  • Issue in creating an AP Invoice without copying the AP Down Payment

    Dear All,
    I create a purchase order; using this PO I have created an A/P Down Payment Invoice, then created an outgoing payment for the invoice, then I do the GRN, the landed cost, and the AP invoice.
    The problem is that the user did not pull the down payment invoice into the AP invoice; it is now showing me balances for the BP, whereas there should be a nil balance.
    Even in the BP reconciliation, only the AP invoice is visible.
    The BP is multi-currency.

    Hi,
    You cannot have the same PO link after creating the cancellation document (AP credit memo) for the AP invoice.
    Create a new AP invoice considering the down payment. This will adjust the vendor balance.
    You cannot book against the GRPO either, as once the GRPO is closed, its status cannot be changed back to open.
    So you have to follow the above alternative; for auditing purposes you can keep the PO and GRPO details as remarks.

  • How to create a backout plan before upgrading Windows Server?

    Hi,
    I am going to upgrade Server 2008 R2 to 2012 R2. Before doing that, I would like to create a backout plan. Can you please tell me what content I should put in the backout plan? Can you give me an example?
    Thx

    Hi,
    A backup before upgrading should include all the data and configuration information that is necessary for the computer to function. It is important to perform a backup of configuration information for servers, especially those that provide network infrastructure,
    such as DHCP servers. When you perform the backup, be sure to include the boot and system partitions and the system state data. Another way to back up configuration information is to create a backup set for Automated System Recovery.
    Detailed backup steps depend on the role and function of your server. Certain server roles that are already installed might require additional preparation or actions.
    Here are some documents, just for your reference:
    Upgrade Domain Controllers to Windows Server 2012 R2 and Windows Server 2012
    https://technet.microsoft.com/en-us/library/hh994618.aspx?f=255&MSPPError=-2147217396#BKMK_UpgradePaths
    Performing an in place upgrade of Server 2008 R2 to Server 2012 R2
    http://blogs.technet.com/b/chrisavis/archive/2013/10/01/performing-an-in-place-upgrade-of-server-2008-r2-to-server-2012-r2.aspx
    Best Regards,
    Eve Wang

  • Issue with creating a G/L account using the BAPI "BAPI_ACC_DOCUMENT_POST"

    Hi All,
    I am trying to post a G/L account document (FB50) using the BAPI "BAPI_ACC_DOCUMENT_POST". Can somebody help in populating values for the following parameters:
    1) OBJ_TYPE
    2) OBJ_KEY
    3) BUS_ACT
    I tried passing BKPF & BKPFF to the object type, but I am getting an error saying "Incorrect Entry".
    Please let me know where to find the values for these fields.
    Any Help is much appreciated.
    Thanks in Advance.

    Hi Ram,
    Thanks for your input. The BAPI is working fine now, with one small change: I am passing REACI for the object type instead of BKPFF.
    Here are the values that I am passing to the BAPI.
    OBJ_TYPE                       REACI
    OBJ_KEY                        TEST
    OBJ_SYS                        ECSCLNT010
    BUS_ACT                        RFBU
    USERNAME                       KKUMAR
    HEADER_TXT                     TEST_BAPI
    Thanks once again.
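As a sketch, the header values listed above map onto the BAPI's DOCUMENTHEADER structure roughly as follows. This only builds the structure; actually posting would require an RFC connection to the SAP system (e.g. via the pyrfc library), which is omitted here, and the connection details shown in the comment are placeholders:

```python
# Build the DOCUMENTHEADER structure for BAPI_ACC_DOCUMENT_POST as a dict,
# using the values from the reply above. Posting it additionally requires
# an RFC connection, which is deliberately left out of this sketch.
document_header = {
    "OBJ_TYPE": "REACI",       # reference procedure / object type
    "OBJ_KEY": "TEST",         # reference key
    "OBJ_SYS": "ECSCLNT010",   # logical system of the source document
    "BUS_ACT": "RFBU",         # business transaction (FI posting)
    "USERNAME": "KKUMAR",      # posting user
    "HEADER_TXT": "TEST_BAPI", # document header text
}

# With pyrfc (assumption: SAP NW RFC SDK installed, destination configured):
# from pyrfc import Connection
# conn = Connection(dest="ECS")
# result = conn.call("BAPI_ACC_DOCUMENT_POST",
#                    DOCUMENTHEADER=document_header,
#                    ACCOUNTGL=[...], CURRENCYAMOUNT=[...])

print(sorted(document_header))
```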

  • How can I create & print a whole sheet of the same label?

    For example, a full page of return address labels, or "Thank You" on every label? Someone posted a similar question in '07 but there was no solution posted at that time. Does anyone know how to do this in Mail and Address? or even in Pages? Thanks.

    Thank you for your input. I am trying to stay attentive to the things that the Mac can do better, or more so, than the PC. It's hard to stay focused on that when the basics (things that I do regularly) seem to be just not even offered... like today, I wanted to get a "receipt" with an email I sent out from Mail and Address Book. However, as you probably know, it isn't a supported function. Again I noticed in the discussions that many folks are disappointed and questioning why something that has been around for so long in other mail software is not a part of the Mac's. As far as using Word, I see your point, but I did pay for iWork because the Apple Store guy told me I could do all the same things and even more than with Word. I would have just paid for Word had I known, but now I'm not paying for both. I found avery.com and will use their free online label maker for now till I figure out another way to do it without having to incur another fee. So far, iWork and Mail and Address Book have not been impressive to me. Hopefully I will keep uncovering aspects of them that make them shine and impress me, and make life easier when on the computer. Thanks. Peace.

  • Mkarchroot fails to create chroot cleanly if I omit the base group.

    Following the instructions on this wiki page consistently ends in errors creating a clean chroot, plus leftover mounts.  For example:
    % mkdir /mnt/data/test
    % sudo mkarchroot /mnt/data/test base-devel
    ==> Creating install root at /mnt/data/test/root
    ==> Installing packages to /mnt/data/test/root
    :: Synchronizing package databases...
    router 4.8 KiB 0.00B/s 00:00 [############################################] 100%
    core 103.4 KiB 771K/s 00:00 [############################################] 100%
    extra 1447.9 KiB 1477K/s 00:01 [############################################] 100%
    community 2019.1 KiB 1754K/s 00:01 [############################################] 100%
    :: There are 25 members in group base-devel:
    :: Repository core
    1) autoconf 2) automake 3) binutils 4) bison 5) fakeroot 6) file 7) findutils 8) flex 9) gawk 10) gcc 11) gettext
    12) grep 13) groff 14) gzip 15) libtool 16) m4 17) make 18) pacman 19) patch 20) pkg-config 21) sed 22) sudo
    23) texinfo 24) util-linux 25) which
    Enter a selection (default=all):
    resolving dependencies...
    looking for inter-conflicts...
    Packages (84): acl-2.2.52-1 archlinux-keyring-20130525-2 attr-2.4.47-1 bash-4.2.045-4 bzip2-1.0.6-4
    ca-certificates-20130610-1 cloog-0.18.0-2 coreutils-8.21-2 cracklib-2.9.0-1 curl-7.32.0-1 db-5.3.21-1
    diffutils-3.3-1 dirmngr-1.1.1-1 e2fsprogs-1.42.8-1 expat-2.1.0-2 filesystem-2013.05-2 gcc-libs-4.8.1-3
    gdbm-1.10-1 glib2-2.36.4-1 glibc-2.18-2 gmp-5.1.2-1 gnupg-2.0.20-2 gpgme-1.4.2-2 iana-etc-2.30-3
    isl-0.12-1 less-458-1 libarchive-3.1.2-1 libassuan-2.1.1-1 libcap-2.22-5 libffi-3.0.13-3 libgcrypt-1.5.3-1
    libgpg-error-1.12-1 libgssglue-0.4-1 libksba-1.3.0-1 libldap-2.4.35-4 libltdl-2.4.2-10 libmpc-1.0.1-1
    libsasl-2.1.26-4 libssh2-1.4.3-1 libtirpc-0.2.3-1 linux-api-headers-3.10.6-1 lzo2-2.06-1 mpfr-3.1.2-1
    ncurses-5.9-5 openssl-1.0.1.e-3 pacman-mirrorlist-20130626-1 pam-1.1.6-4 pambase-20130113-1 pcre-8.33-1
    perl-5.18.0-1 pinentry-0.8.3-1 pth-2.0.7-4 readline-6.2.004-1 run-parts-4.4-1 shadow-4.1.5.1-6 tar-1.26-4
    tzdata-2013d-1 xz-5.0.5-1 zlib-1.2.8-1 autoconf-2.69-1 automake-1.14-1 binutils-2.23.2-3 bison-3.0-1
    fakeroot-1.19-1 file-5.14-1 findutils-4.4.2-5 flex-2.5.37-1 gawk-4.1.0-1 gcc-4.8.1-3 gettext-0.18.3-1
    grep-2.14-2 groff-1.22.2-3 gzip-1.6-1 libtool-2.4.2-10 m4-1.4.16-3 make-3.82-6 pacman-4.1.2-1
    patch-2.7.1-2 pkg-config-0.28-1 sed-4.2.2-3 sudo-1.8.7-1 texinfo-5.1-1 util-linux-2.23.2-1 which-2.20-6
    Total Installed Size: 369.38 MiB
    :: Proceed with installation? [Y/n]
    (84/84) checking keys in keyring [############################################] 100%
    (84/84) checking package integrity [############################################] 100%
    (83/84) installing texinfo [############################################] 100%
    (84/84) installing which [############################################] 100%
    umount: /mnt/data/test/root/dev: target is busy.
    (In some cases useful info about processes that use
    the device is found by lsof(8) or fuser(1))
    umount: /mnt/data/test/root: target is busy.
    (In some cases useful info about processes that use
    the device is found by lsof(8) or fuser(1))
    mknod(/mnt/data/test/root) failed: File exists
    mknod(/mnt/data/test/root) failed: File exists
    mknod(/mnt/data/test/root) failed: File exists
    mknod(/mnt/data/test/root) failed: File exists
    mknod(/mnt/data/test/root) failed: File exists
    mknod(/mnt/data/test/root) failed: File exists
    sudo mkarchroot /mnt/data/test/root base-devel 10.04s user 0.80s system 68% cpu 15.906 total
    And...
    % df -h | grep test
    udev 7.7G 0 7.7G 0% /mnt/data/test/root/dev
    udev 7.7G 0 7.7G 0% /mnt/data/test/root/dev
    If I run the very same line but install both base and base-devel, I get the expected results with no errors or leftover mounts.  Is this bad advice on the wiki or a genuine error in the script?
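The doubled udev lines in the df output above are the telltale: the chroot's /dev was left bind-mounted (twice). A small sketch for spotting that programmatically by looking for repeated mount points (the sample lines are illustrative, not taken from the poster's system):

```python
# Detect mount points that appear more than once in df/mount-style output,
# which indicates a leftover or duplicated bind mount like the udev lines above.
from collections import Counter

def duplicated_mountpoints(mount_lines):
    """Return mount points listed more than once (last field of each line)."""
    points = [line.split()[-1] for line in mount_lines if line.strip()]
    counts = Counter(points)
    return sorted(p for p, n in counts.items() if n > 1)

sample = [
    "udev 7.7G 0 7.7G 0% /mnt/data/test/root/dev",
    "udev 7.7G 0 7.7G 0% /mnt/data/test/root/dev",
    "tmpfs 7.7G 0 7.7G 0% /run",
]
print(duplicated_mountpoints(sample))  # ['/mnt/data/test/root/dev']
```

Any mount point reported here still needs to be unmounted before the chroot directory can be removed cleanly.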

    Happens under bash or zsh... and wtf
    % sudo lsof /mnt/data/chroot64/root/dev
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    systemd 1 root 0u CHR 1,3 0t0 1028 /dev/null
    systemd 1 root 1u CHR 1,3 0t0 1028 /dev/null
    systemd 1 root 2u CHR 1,3 0t0 1028 /dev/null
    systemd 1 root 3w CHR 1,11 0t0 1034 /dev/kmsg
    systemd 1 root 13w CHR 10,130 0t0 12967 /dev/watchdog
    systemd 1 root 24u FIFO 0,5 0t0 5640 /dev/initctl
    systemd 1 root 32r CHR 10,235 0t0 1105 /dev/autofs
    kdevtmpfs 48 root cwd DIR 0,5 3320 1025 /
    kdevtmpfs 48 root rtd DIR 0,5 3320 1025 /
    systemd-j 187 root 0r CHR 1,3 0t0 1028 /dev/null
    systemd-j 187 root 1w CHR 1,3 0t0 1028 /dev/null
    systemd-j 187 root 2w CHR 1,3 0t0 1028 /dev/null
    systemd-j 187 root 6w CHR 1,11 0t0 1034 /dev/kmsg
    systemd-j 187 root 8u CHR 1,11 0t0 1034 /dev/kmsg
    systemd-u 200 root 0u CHR 1,3 0t0 1028 /dev/null
    systemd-u 200 root 1u CHR 1,3 0t0 1028 /dev/null
    systemd-u 200 root 2u CHR 1,3 0t0 1028 /dev/null
    crond 505 root 0r CHR 1,3 0t0 1028 /dev/null
    fancontro 506 root 0r CHR 1,3 0t0 1028 /dev/null
    dbus-daem 509 dbus 0r CHR 1,3 0t0 1028 /dev/null
    systemd-l 510 root 0r CHR 1,3 0t0 1028 /dev/null
    systemd-l 510 root 12u CHR 13,65 0t0 9441 /dev/input/event1
    systemd-l 510 root 13u CHR 13,66 0t0 9442 /dev/input/event2
    systemd-l 510 root 14u CHR 13,64 0t0 9440 /dev/input/event0
    systemd-l 510 root 15u CHR 4,6 0t0 1047 /dev/tty6
    gpm 515 root 0u CHR 1,3 0t0 1028 /dev/null
    gpm 515 root 1u CHR 1,3 0t0 1028 /dev/null
    gpm 515 root 2u CHR 1,3 0t0 1028 /dev/null
    agetty 516 root 0u CHR 4,1 0t0 1042 /dev/tty1
    agetty 516 root 1u CHR 4,1 0t0 1042 /dev/tty1
    agetty 516 root 2u CHR 4,1 0t0 1042 /dev/tty1
    lxdm-bina 517 root 0r CHR 1,3 0t0 1028 /dev/null
    X 561 root mem CHR 226,0 5340 /dev/dri/card0
    X 561 root 11u CHR 4,7 0t0 1048 /dev/tty7
    X 561 root 12u CHR 226,0 0t0 5340 /dev/dri/card0
    X 561 root 13u CHR 10,63 0t0 1026 /dev/vga_arbiter
    X 561 root 15u CHR 13,65 0t0 9441 /dev/input/event1
    X 561 root 16u CHR 13,66 0t0 9442 /dev/input/event2
    X 561 root 17u CHR 13,64 0t0 9440 /dev/input/event0
    X 561 root 18u CHR 13,67 0t0 9443 /dev/input/event3
    X 561 root 19u CHR 13,68 0t0 9444 /dev/input/event4
    X 561 root 20u CHR 13,70 0t0 5759 /dev/input/event6
    X 561 root 21u CHR 13,71 0t0 5785 /dev/input/event7
    ntpd 872 ntp 0u CHR 1,3 0t0 1028 /dev/null
    ntpd 872 ntp 1u CHR 1,3 0t0 1028 /dev/null
    ntpd 872 ntp 2u CHR 1,3 0t0 1028 /dev/null
    gnome-key 890 graysky 0r CHR 1,3 0t0 1028 /dev/null
    gnome-key 890 graysky 1r CHR 1,3 0t0 1028 /dev/null
    gnome-key 890 graysky 2r CHR 1,3 0t0 1028 /dev/null
    sh 892 graysky 0r CHR 1,3 0t0 1028 /dev/null
    xfce4-ses 915 graysky 0r CHR 1,3 0t0 1028 /dev/null
    dbus-laun 918 graysky 0r CHR 1,3 0t0 1028 /dev/null
    dbus-laun 918 graysky 1u CHR 1,3 0t0 1028 /dev/null
    dbus-laun 918 graysky 2u CHR 1,3 0t0 1028 /dev/null
    dbus-daem 919 graysky 0u CHR 1,3 0t0 1028 /dev/null
    dbus-daem 919 graysky 1u CHR 1,3 0t0 1028 /dev/null
    dbus-daem 919 graysky 2u CHR 1,3 0t0 1028 /dev/null
    polkitd 921 polkitd 0u CHR 1,3 0t0 1028 /dev/null
    polkitd 921 polkitd 1u CHR 1,3 0t0 1028 /dev/null
    polkitd 921 polkitd 2u CHR 1,3 0t0 1028 /dev/null
    xfconfd 929 graysky 0u CHR 1,3 0t0 1028 /dev/null
    xfconfd 929 graysky 1u CHR 1,3 0t0 1028 /dev/null
    xfconfd 929 graysky 2u CHR 1,3 0t0 1028 /dev/null
    xfwm4 936 graysky 0r CHR 1,3 0t0 1028 /dev/null
    xfce4-pan 940 graysky 0r CHR 1,3 0t0 1028 /dev/null
    Thunar 942 graysky 0r CHR 1,3 0t0 1028 /dev/null
    xfdesktop 944 graysky 0r CHR 1,3 0t0 1028 /dev/null
    polkit-gn 946 graysky 0r CHR 1,3 0t0 1028 /dev/null
    xscreensa 955 graysky 0r CHR 1,3 0t0 1028 /dev/null
    xfce4-pow 956 graysky 0u CHR 1,3 0t0 1028 /dev/null
    xfce4-pow 956 graysky 1u CHR 1,3 0t0 1028 /dev/null
    xfce4-pow 956 graysky 2u CHR 1,3 0t0 1028 /dev/null
    xfsetting 957 graysky 0r CHR 1,3 0t0 1028 /dev/null
    pulseaudi 963 graysky 0r CHR 1,3 0t0 1028 /dev/null
    pulseaudi 963 graysky 1w CHR 1,3 0t0 1028 /dev/null
    pulseaudi 963 graysky 2w CHR 1,3 0t0 1028 /dev/null
    pulseaudi 963 graysky 16u CHR 116,10 0t0 11930 /dev/snd/controlC0
    pulseaudi 963 graysky 23u CHR 116,10 0t0 11930 /dev/snd/controlC0
    pulseaudi 963 graysky 28u CHR 116,10 0t0 11930 /dev/snd/controlC0
    at-spi-bu 965 graysky 0u CHR 1,3 0t0 1028 /dev/null
    at-spi-bu 965 graysky 1u CHR 1,3 0t0 1028 /dev/null
    at-spi-bu 965 graysky 2u CHR 1,3 0t0 1028 /dev/null
    rtkit-dae 966 rtkit 0r CHR 1,3 0t0 1028 /dev/null
    dbus-daem 972 graysky 0r CHR 1,3 0t0 1028 /dev/null
    dbus-daem 972 graysky 1u CHR 1,3 0t0 1028 /dev/null
    dbus-daem 972 graysky 2u CHR 1,3 0t0 1028 /dev/null
    upowerd 974 root 0r CHR 1,3 0t0 1028 /dev/null
    upowerd 974 root 8w CHR 10,62 0t0 1118 /dev/cpu_dma_latency
    upowerd 974 root 10w CHR 10,61 0t0 1119 /dev/network_latency
    gconf-hel 1004 graysky 0r CHR 1,3 0t0 1028 /dev/null
    gconf-hel 1004 graysky 2w CHR 1,3 0t0 1028 /dev/null
    gconfd-2 1006 graysky 0u CHR 1,3 0t0 1028 /dev/null
    gconfd-2 1006 graysky 1u CHR 1,3 0t0 1028 /dev/null
    gconfd-2 1006 graysky 2u CHR 1,3 0t0 1028 /dev/null
    gconfd-2 1006 graysky 3u CHR 1,3 0t0 1028 /dev/null
    panel-7-m 1007 graysky 0r CHR 1,3 0t0 1028 /dev/null
    panel-7-m 1007 graysky 8u CHR 116,10 0t0 11930 /dev/snd/controlC0
    at-spi2-r 1010 graysky 0r CHR 1,3 0t0 1028 /dev/null
    at-spi2-r 1010 graysky 1u CHR 1,3 0t0 1028 /dev/null
    at-spi2-r 1010 graysky 2u CHR 1,3 0t0 1028 /dev/null
    panel-4-d 1011 graysky 0r CHR 1,3 0t0 1028 /dev/null
    xfce4-sen 1012 graysky 0r CHR 1,3 0t0 1028 /dev/null
    panel-2-c 1014 graysky 0r CHR 1,3 0t0 1028 /dev/null
    panel-6-s 1015 graysky 0r CHR 1,3 0t0 1028 /dev/null
    xfce4-net 1018 graysky 0r CHR 1,3 0t0 1028 /dev/null
    gvfsd 1021 graysky 0u CHR 1,3 0t0 1028 /dev/null
    gvfsd 1021 graysky 1u CHR 1,3 0t0 1028 /dev/null
    gvfsd 1021 graysky 2u CHR 1,3 0t0 1028 /dev/null
    panel-8-a 1022 graysky 0r CHR 1,3 0t0 1028 /dev/null
    gvfsd-fus 1030 graysky 0r CHR 1,3 0t0 1028 /dev/null
    gvfsd-fus 1030 graysky 1w CHR 1,3 0t0 1028 /dev/null
    gvfsd-fus 1030 graysky 2w CHR 1,3 0t0 1028 /dev/null
    gvfsd-fus 1030 graysky 3u CHR 10,229 0t0 10615 /dev/fuse
    gvfs-udis 1045 graysky 0u CHR 1,3 0t0 1028 /dev/null
    gvfs-udis 1045 graysky 1u CHR 1,3 0t0 1028 /dev/null
    gvfs-udis 1045 graysky 2u CHR 1,3 0t0 1028 /dev/null
    udisksd 1047 root 0u CHR 1,3 0t0 1028 /dev/null
    udisksd 1047 root 1u CHR 1,3 0t0 1028 /dev/null
    udisksd 1047 root 2u CHR 1,3 0t0 1028 /dev/null
    gvfs-gpho 1056 graysky 0u CHR 1,3 0t0 1028 /dev/null
    gvfs-gpho 1056 graysky 1u CHR 1,3 0t0 1028 /dev/null
    gvfs-gpho 1056 graysky 2u CHR 1,3 0t0 1028 /dev/null
    gvfs-afc- 1060 graysky 0u CHR 1,3 0t0 1028 /dev/null
    gvfs-afc- 1060 graysky 1u CHR 1,3 0t0 1028 /dev/null
    gvfs-afc- 1060 graysky 2u CHR 1,3 0t0 1028 /dev/null
    gvfsd-tra 1070 graysky 0r CHR 1,3 0t0 1028 /dev/null
    gvfsd-tra 1070 graysky 1u CHR 1,3 0t0 1028 /dev/null
    gvfsd-tra 1070 graysky 2u CHR 1,3 0t0 1028 /dev/null
    xfce4-ter 11701 graysky 0r CHR 1,3 0t0 1028 /dev/null
    xfce4-ter 11701 graysky 9u CHR 5,2 0t0 1107 /dev/ptmx
    xfce4-ter 11701 graysky 14u CHR 5,2 0t0 1107 /dev/ptmx
    gnome-pty 11704 graysky 2r CHR 1,3 0t0 1028 /dev/null
    chromium 22596 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium 22596 graysky 30r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22598 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium- 22599 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium 22601 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium 22601 graysky 11r CHR 1,9 0t0 1033 /dev/urandom
    nacl_help 22607 graysky 0r CHR 1,3 0t0 1028 /dev/null
    nacl_help 22607 graysky 4r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22608 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium 22608 graysky 11r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22636 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium 22636 graysky 3r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22636 graysky 11r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22641 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium 22641 graysky 3r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22641 graysky 11r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22662 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium 22662 graysky 3r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22662 graysky 11r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22668 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium 22668 graysky 3r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22668 graysky 11r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22676 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium 22676 graysky 3r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22676 graysky 11r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22691 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium 22691 graysky 3r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22691 graysky 11r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22714 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium 22714 graysky 3r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22714 graysky 11r CHR 1,9 0t0 1033 /dev/urandom
    chromium 22744 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium 22744 graysky 11r CHR 1,9 0t0 1033 /dev/urandom
    gvfsd-met 25973 graysky 0u CHR 1,3 0t0 1028 /dev/null
    gvfsd-met 25973 graysky 1u CHR 1,3 0t0 1028 /dev/null
    gvfsd-met 25973 graysky 2u CHR 1,3 0t0 1028 /dev/null
    chromium 30037 graysky 0r CHR 1,3 0t0 1028 /dev/null
    chromium 30037 graysky 3r CHR 1,9 0t0 1033 /dev/urandom
    chromium 30037 graysky 11r CHR 1,9 0t0 1033 /dev/urandom

  • Business Process: Final Demand Plan Key Figure in the past for APO DP

    Hello APO process experts,
    I have a BPX question for APO DP process owners, consultants or gurus:
    1.- Have you ever seen, at any SAP APO customer, that in their APO DP planning books they chose or designed to update the Final Demand Plan key figure in the past, closed historical buckets with the actual sales?
    Meaning that in the past, closed historical buckets, Actual Sales and Final Demand Plan look exactly the same.
    Meaning that they create a dynamic Final Demand Plan key figure, so to speak, yet they did not choose to create a key figure that also keeps the Final Demand Plan as originally entered.
    2.- Would you think that doing the above, without keeping any key figure in the planning book that actually shows what the Final Demand Plan was in the past, is a good practice?
    The demand planner could still retrieve the Final Demand Plan from BI reports, yet dynamically they could never really see it in the planning book.
    3.- Do you think the above dynamic key figure would bias the demand planner's behaviour towards manipulating the future open Final Demand Plan buckets, entering volumes that sum to a monthly total, independently of the over- and under-achievements in the past (since those are not really visible to them)?
    I would really like your feedback,
    thanks in advance!
    JD Loera

    Hello Thanks for your feedback.
    I agree with your point, and I hope that I still get more feedback from other business process experts.
    I am trying to build a case for my client to challenge their initial design decision, as this practice is going to hurt the demand management process if what they intend to achieve is a true "unconstrained demand plan".
    Much appreciated
    JD Loera

  • Unable to create a new planning application in EPM11.1.3.1

    Hello Guru's
    I am not able to create a new planning application in the existing environment.
    Last week I created a new application to migrate from prod to dev and it was successful, but creating a new app is failing now.
    I checked the database connection and the Essbase connection; both are fine.
    I have checked the logs in the path
    D:\Hyperion\deployments\WebLogic9\servers\HyperionPlanning\logs
    Environment: 11.1.3.1
    error:
    <Jan 24, 2012 5:15:46 AM EST> <Error> <JMS> <som-epmdevapp02> <HyperionPlanning> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1327400146994> <BEA-040373> <Could not find a license for JMS. Please contact BEA to get a license.>
    Cheers,
    Raj

    Check the logs carefully; I think the schema/user doesn't have admin rights. Please check with the DBA team and try creating it again.
    If you have DB admin access, you can create a new schema/user and grant full admin privileges to that user (or ask your DBA team), then create the DSN and validate it, then try creating the new application (make sure you are giving all the correct info while creating the app). It should work. If it still gives an error, please let us know what the error is on the front end, and at the same time check the logs.
    Using the timestamp at which you tried to create the application, the relevant log entries will be there; check carefully and you will definitely find the cause of the issue.

  • Issue with Creating Forecast Profiles/Forecasting

    Hello Experts,
    We are facing an issue with creating forecast profiles.
    We have two FYVs defined in our system, one with 52/53 fiscal weeks (fiscal year variant W1) and the other with 12 fiscal months (FYV M1). Our storage buckets profile (STP) uses W1, since we have many data views that use W1 as the FYV in the attached planning buckets profile (PBP). We also have data views that display in fiscal months (using FYV M1 in the attached PBP); data from fiscal weeks is aggregated and shown in fiscal months. We need to do forecasting using FYV M1, that is, forecast in fiscal months, but since the storage bucket profile has W1, which is used in the planning area configuration, we are unable to create any forecast profiles with FYV M1. Please note that we cannot use M1 in the STP, because when we used M1 in the STP we could not create data views in fiscal weeks (using W1).
    1. Is there any way we can forecast using M1 while W1 is assigned to the planning area/STP?
    2. OR, we are willing to assign M1 to the PA, provided we can use W1 in some of the data views; unfortunately we were unable to do this, though the reverse is possible, i.e. we could have W1 in the STP and M1 in some of the related PBPs/weekly data views.
    Please let me know if any of these are possible, or if there is any alternative way to do forecasting in fiscal months.
    Thanks
    Tej

    Hi,
    You are correct, the storage bucket profile always has to be at the most detailed level. The time bucket profile can be at higher levels like monthly, quarterly, etc.
    Coming to the root of your problem, which is that you are unable to forecast at a level other than the one specified in your storage bucket profile: unfortunately the answer is no.
    You can run a forecast only at the level at which the data is stored, not at the level at which the data is viewed.
    One workaround is to create an additional planning area on the same MPOS; this additional PA can contain only the bare minimum key figures required for your forecast. After you generate your forecast, you can copy it to your weeks-based PA and proceed from there. This copy of key figures between PAs is much faster, as it happens at the liveCache level, and should not cause time delays.
    NOTE - You have to exercise caution when you are using two periodicities, i.e. weeks and months. If you are using the standard SAP calendar, you are good to go. If you are creating custom fiscal variants, please ensure the start and end of each month is the same in both the weekly and monthly variants. Failing that, there will be a mismatch of data between the two data views.
    Hope this helps.
    Thanks
    Mani Suresh

  • How to create a service entry sheet based on the PO

    Gurus,
    I am creating a service entry sheet from the PO but I am getting the error "Please maintain services or limits (Message no. SE029)" - Diagnosis: you cannot enter data until the PO item has been maintained correctly.
    The document type of the PO is standard NB, the account assignment category is Q (project make-to-order), and the item category is D (service). I also tried creating a PR with account assignment category Q and item category D, but still cannot proceed; a message asks me to enter a service entry sheet number. As I understand it, the process is: create a PO (possibly based on a PR), post the GR, then create a service entry sheet in ML81N, but I cannot get that far. Simply creating a PR or PO with the mentioned account assignment and item category produces an error asking for a service entry sheet number.
    Please help. Thanks!

    Hi,
    Process for creating a service entry sheet
    Transaction code: ML81N
    1) To open the respective purchase order, click 'Other Purchase Order', then enter the purchase order number.
    2) Click the 'Create Entry Sheet' icon (3rd icon, top left).
    3) Enter a short text (e.g. R/A Bill No. 1); a service entry sheet number is generated at the top.
    4) Click the 'Service Selection' icon at the bottom of the screen.
    5) The first time you make a service entry sheet for a given purchase order, choose "Adopt Full Quantity" by ticking the checkbox next to it, then press Enter. (For subsequent sheets no adoption is required; just continue.)
    6) Select the respective services on the left-hand side, then click the 'Services' (adopt services) icon at the top.
    7) Enter the completed quantity, then click the 'Accept' icon (the green flag at the top).
    8) Save.
    9) The service entry sheet is saved and the account posting is made.
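    For reference, the same entry sheet can also be created programmatically via BAPI_ENTRYSHEET_CREATE, the BAPI behind ML81N. The sketch below only builds the call payload; the exact field names should be verified in SE37 on your system, and the PO/service numbers shown are placeholders. An actual call would go through an RFC library such as pyrfc (commented out here).

```python
def build_entrysheet_payload(po_number, po_item, short_text, services):
    """Build a payload for BAPI_ENTRYSHEET_CREATE.
    services: list of (service_number, completed_quantity) tuples.
    Field names follow the BAPI structures but should be verified
    in SE37 before use."""
    header = {
        "PO_NUMBER": po_number,
        "PO_ITEM": po_item,
        "SHORT_TEXT": short_text,   # e.g. "R/A Bill No. 1" (step 3)
        "ACCEPTANCE": "X",          # accept on save (green-flag step 7)
    }
    lines = [
        {"LINE_NO": str(i + 1).zfill(10),
         "SERVICE": svc,
         "QUANTITY": qty}
        for i, (svc, qty) in enumerate(services)
    ]
    return {"ENTRYSHEETHEADER": header, "ENTRYSHEETSERVICES": lines}

payload = build_entrysheet_payload("4500000123", "00010", "R/A Bill No. 1",
                                   [("3000001", 5.0)])
# With pyrfc (connection details omitted):
# conn.call("BAPI_ENTRYSHEET_CREATE", **payload)
print(payload["ENTRYSHEETHEADER"]["PO_NUMBER"])
```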
    Hope it is useful for you,
    Regards,
    K.Rajendran

  • Transfer Demand Plan to R/3

    Hi All,
    I need to transfer the Demand Plan to R/3.
    Please let me know the importance of Period Split profile in the transfer profile screen.
    we are using the posting periodicity in the planning book.
    What is the use of Distribution Function?
    Is it possible to transfer the demand in days for the first 3-4 weeks and then in weeks for the rest of the period?
    Your input/suggestions are highly appreciated.
    Regards,
    Prabhat
    Edited by: Prabhat Sahay on Sep 29, 2010 3:37 PM

    Prabhat,
    A period split profile is used when your demand plan is calculated in one periodicity (say, months) and you want to see the result in ERP in a more detailed periodicity (say, weeks or days). If your demand plan periodicity is the same as the periodicity in which you want to see the results in R/3, there is no need for a period split profile. As the name implies, a period split splits the value down to a finer level.
    The distribution function is used to assign weightings to the different periods. We divide the value of a period equally, so I am not completely sure about the distribution function concept, but I believe that if you have a demand plan of 100 CS for a month and you are splitting it into weeks (defined in the period split profile), and you decide to give more weight to weeks 1 and 2 than to weeks 3 and 4, that is where the distribution function comes into play. Not sure about it though.
    Also refer to consulting note 403050. It talks about releasing the forecast to SNP, but the concept of period split profile / distribution function is the same irrespective of the target to which the demand plan is released.
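    The weighted split described above can be sketched as follows. This is an illustration of the idea only, not the actual APO distribution algorithm; the 3-3-2-2 weighting is hypothetical.

```python
def split_with_weights(total, weights):
    """Split one coarse-period value into finer buckets using
    distribution-function-style weights (illustrative only)."""
    s = sum(weights)
    return [round(total * w / s, 2) for w in weights]

# 100 CS for the month; weeks 1 and 2 weighted more heavily than 3 and 4
print(split_with_weights(100.0, [3, 3, 2, 2]))  # → [30.0, 30.0, 20.0, 20.0]
```

    With equal weights this reduces to a plain even split, which matches the default behaviour described above.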
    Thanks
    Saradha
