Root Components Insertion Not Processed
Hi,
I am trying to import a solution from Microsoft Dynamics CRM 2015 on-premises into Microsoft Dynamics CRM 2015 Online, and the import fails with the message "Root Components Insertion not Processed".
Has anybody encountered this issue?
Vilas Magar http://microsoftcrmworld.blogspot.com/
Hello,
Have you tried searching for this error? I've found the following pages:
https://social.microsoft.com/Forums/en-US/fddff17f-1cca-4d33-a666-60c08ad76b0c/crm-2011-import-failure?forum=crmdevelopment
http://www.bds-experts.com/crm-2011-import-failed-root-component-insertion-unprocessed/
Dynamics CRM MVP
My blog
Similar Messages
-
Hi,
I am using JDEV 11.1.2.1.0
I am getting the following error:
<RegionRenderer> <encodeAll> The region component with id: pt1:r1 has detected a page fragment with multiple root components. Fragments with more than one root component may not display correctly in a region and may have a negative impact on performance. It is recommended that you restructure the page fragment to have a single root component.
The piece of code for the region is:
<f:facet name="second">
<af:panelStretchLayout id="pa1"
binding="#{backingBeanScope.Assign.pa1}">
<f:facet name="center">
<af:region value="#{bindings.tfdAssignGraph1.regionModel}" id="r1"
binding="#{backingBeanScope.Assign.r1}"/>
</f:facet>
</af:panelStretchLayout>
</f:facet>
How do I resolve it?
Thanks.
Hi,
I see at least 3 errors
1. <RegionRenderer> <encodeAll> The region component with id: pt1:r1 has detected a page fragment with multiple root components.
The page fragment should only have a single component under the jsp:root tag. If you see more than one, wrap them in e.g. an af:panelGroupLayout or af:group component.
2. SAPFunction.jspx/.xml" has an invalid character ".".
Check the document (you can open it in JDeveloper if the customization was a seeded one). It seems something went bad while editing this file.
3. The expression "#{bindings..regionModel}" (that was specified for the RegionModel "value" attribute of the region component with id "pePanel") evaluated to null.
"pageeditorpanel" seems to be missing in the PageDef file of the page holding the region.
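For the first error, the fix is to give the fragment a single root. A minimal sketch (the ids and the second component here are hypothetical, not taken from your page):

```xml
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
          xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <!-- Single root: the former sibling roots are wrapped in one panelGroupLayout -->
  <af:panelGroupLayout id="pgl0" layout="vertical">
    <af:panelStretchLayout id="pa1"/>
    <af:outputText id="ot1" value="formerly a second root component"/>
  </af:panelGroupLayout>
</jsp:root>
```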
Frank -
Bill and Routing Interface concurrent program is not processing component
Hi All,
While running the Bill and Routing Interface concurrent program, the component items (line items) from bom_inventory_comps_interface are not processed and still show process_flag = 1, whereas the billing item (header item) is processed successfully. I have tried every option, supplying component_sequence_id and batch_id, but it still does not process the component items.
Is it recommended to populate bill_sequence_id and component_sequence_id in the interface tables using the bom_inventory_components_s sequence?
I would be very pleased to hear from you guys. Please help me resolve this issue.
My header insert statement:
INSERT INTO apps.bom_bill_of_mtls_interface@system_link_visma
(assembly_item_id, organization_id,
alternate_bom_designator, last_update_date,
last_updated_by, creation_date, created_by,
revision, item_number, item_description,
implementation_date, transaction_type,
process_flag, assembly_type, batch_id)
VALUES (l_inv_item_id, l_org_id,
NULL, SYSDATE,
1318, SYSDATE, 1318,
l_revision, l_item_num, l_description,
SYSDATE, 'CREATE',
1, 1, 10003535);
Component insert statement:
INSERT INTO apps.bom_inventory_comps_interface@system_link_visma
(operation_seq_num, component_item_id,
item_num, basis_type, component_quantity,
auto_request_material, effectivity_date,
disable_date, planning_factor,
component_yield_factor,
enforce_int_requirements,
include_in_cost_rollup, wip_supply_type,
supply_subinventory, supply_locator_id,
check_atp, optional,
mutually_exclusive_options,
low_quantity, high_quantity,
so_basis, shipping_allowed,
include_on_ship_docs, required_to_ship,
required_for_revenue, component_remarks,
transaction_type, process_flag,
assembly_item_id, component_item_number,
batch_id, component_sequence_id)
VALUES (l_operation_seq, l_comp_item_id,
cur2.item_sequence, l_basis, cur2.quantity,
l_auto_request_mtls, cur2.from_date,
cur2.TO_DATE, cur2.planning_factor,
cur2.yield_factor,
l_enforce_int_requirements,
l_include_in_cost_rollup, l_supply_type,
l_supply_subinventory, NULL,
l_check_atp, l_optional,
l_mutually_exclusive_options,
cur2.minimum_quantity, cur2.maximum_quantity,
l_sale_order_basis, l_shippable_flag,
l_include_on_ship_docs, l_required_to_ship,
l_required_for_revenue, cur2.comments,
'CREATE', 1,
l_inv_item_id, l_comp_item_num,
10003535, apps.bom_inventory_components_s.nextval@system_link_visma);
Subcomponent insert statement:
INSERT INTO apps.bom_sub_comps_interface@system_link_visma
(substitute_component_id,
substitute_item_quantity,
assembly_item_id, component_item_id,
operation_seq_num, organization_id,
substitute_comp_number,
component_item_number,
assembly_item_number,
transaction_type, process_flag,
enforce_int_requirements,
effectivity_date, component_sequence_id, batch_id)
VALUES (l_sub_comp_item_id,
cur3.quantity,
l_inv_item_id, l_comp_item_id,
cur2.operation_sequence, l_org_id,
l_sub_comp_item_num,
l_comp_item_num,
l_item_num,
'CREATE', 1,
l_enforce_int_requirements,
SYSDATE, apps.bom_inventory_components_s.currval@system_link_visma, 10003535);
Thanks
Raman Sharma
Edited by: 929841 on May 4, 2012 12:28 AM
Edited by: 929841 on May 4, 2012 2:58 AM
You need to populate the organization_id or organization_code in bom_inventory_comps_interface.
Here is a minimal insert
INSERT INTO bom.bom_inventory_comps_interface
(operation_seq_num, last_update_date, last_updated_by,
creation_date, created_by, process_flag, transaction_type,
bom_item_type,
effectivity_date, organization_code, assembly_item_number,
item_num, component_item_number, component_quantity)
VALUES (1 -- op_seq_num
,SYSDATE, 1433
,SYSDATE, 1433, 1 -- process_flag
,'Create',
4 -- bom_item_type 1 Model; 2 Option class; 3 Planning; 4 Standard; 5 Product family
,SYSDATE - 1, 'PUB', 'SSGPARENT1' -- assembly_item_number
,10 --item_num
, 'SSGCOMP1' -- component_item_number
, 10 --qty
);
Sandeep Gandhi -
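If the component rows remain at process_flag = 1 after the program runs, it can also help to query what the program left behind in the interface table. A sketch using the batch_id from this thread (check the resulting process_flag values against the BOM Open Interfaces documentation for your release; the exact flag meanings vary):

```sql
-- Hypothetical diagnostic query: list component rows the program did not import.
-- Rows still at process_flag = 1 were never picked up at all.
SELECT component_item_number,
       operation_seq_num,
       process_flag
  FROM bom.bom_inventory_comps_interface
 WHERE batch_id = 10003535
 ORDER BY operation_seq_num;
```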
Hi,
We have some problems with our Root CA. I can see a lot of failed requests with event ID 22 in the logs. The description is: Active Directory Certificate Services could not process request 3686 due to an error: The revocation function was unable to check revocation because the revocation server was offline. 0x80092013 (-2146885613). The request was for CN=xxxxx.ourdomain.com. Additional information: Error Verifying Request Signature or Signing Certificate
A couple of months ago we decommissioned one of our old 2003 DCs, and it looks like this server might have had something to do with the CA structure, but I am not sure whether it was in use, since I could find the role but was not able to see any existing configuration.
Let's say that this server was previously responsible for the certificates and was the server that should have revoked the old certs; what can I do now to try and correct the problem?
Thank you for your help
//Cris
Hello,
let me recap first:
You see these errors on a ROOT CA, so it seems the ROOT CA is also operating as an ISSUING CA. Some clients try to obtain a new certificate from the ROOT CA, and this fails with the error you mentioned.
You say you had a PREVIOUS CA which you decommissioned, and you now have a brand NEW CA built as a clean install? When you decommissioned the PREVIOUS CA, it was your design decision not to bother with the still-valid certificates it had issued, right?
The error says that the REQUEST signature cannot be validated. REQUESTs are signed either by themselves (self-signed) or, if they are renewal requests, with the previous certificate the client is trying to renew. Self-signed REQUESTs do not contain CRL paths at all.
So this suggests that the failing requests are renewal requests. Renewal requests contain the CRL paths of the previous certificates that are nearing their expiration.
As there are many such REQUESTs and failures, it probably means that the clients use AUTOENROLLMENT, which tries to renew their current, but shortly expiring, certificates during (by default) the last 6 weeks of their lifetime.
As you decommissioned your PREVIOUS CA, it does not issue CRL anymore and the current certificates cannot be checked for validity.
Thus, if the renewal tries to renew them by using the NEW CA, your NEW CA cannot validate CRL of the PREVIOUS CA and will not issue new certificates.
But it would not issue new certificates anyway even if it was able to verify the PREVIOUS CA's CRL, as it seems your NEW CA is completely brand new, without being restored from the PREVIOUS CA's database. Right?
So simply don't bother :-) As long as it was your design to decommission the PREVIOUS CA without bothering with its already issued certificates.
The current certificates which autoenrollment tries to renew cannot be checked for validity. They will also slowly expire over the next 6 weeks or so. After that, autoenrollment will ask your NEW CA to issue a brand new certificate without trying to renew.
Just a clean self-signed REQUEST.
That will succeed.
You can also verify this by trying to issue a certificate on an affected machine manually from Certificates MMC.
ondrej. -
[SOLVED] GRUB2 does not process hooks: System doesn't boot
My initial system was an SSD where /dev/sda1 was my boot partition and /dev/sda2 was an (encrypted) LVM containing home, root, var and swap. Since all partitions were ext3, I decided to do a clean format to ext4 and copy my data back on the partitions. First I archived everything but home with the arch live CD onto a server with:
rsync -a /mnt/* root@server:/path/to/backupdir/
Everything went fine, but after copying back, the system would not boot (it was GRUB Legacy). Since it was not in the MBR but on sda1, I figured I could upgrade to GRUB2 now. So I followed the described procedure (from a live CD) and installed GRUB2 (this time into the MBR of sda). I then regenerated the image via mkinitcpio -p linux and generated a configuration file.
When I try to start the system, GRUB2 gets loaded, but after the two messages for loading the ramdisk it remains silent for some time (no output at all) until it finally complains it cannot find my root. But I did not see any output of any hook being processed (including encrypt) so of course it cannot find my root, since it is still encrypted.
I reformatted my boot partition again and reinstalled and regenerated everything again (I copied the directory contents from my backup but moved the old grub folder). Still the same issue. I know I could probably just reinstall everything and restore the settings, but I'd really prefer to restore my system, since this should be a lot faster.
Here are the relevant configuration files:
rc.conf
# /etc/rc.conf - Main Configuration for Arch Linux
# LOCALIZATION
# LOCALE: available languages can be listed with the 'locale -a' command
# LANG in /etc/locale.conf takes precedence
# DAEMON_LOCALE: If set to 'yes', use $LOCALE as the locale during daemon
# startup and during the boot process. If set to 'no', the C locale is used.
# HARDWARECLOCK: set to "", "UTC" or "localtime", any other value will result
# in the hardware clock being left untouched (useful for virtualization)
# Note: Using "localtime" is discouraged, using "" makes hwclock fall back
# to the value in /var/lib/hwclock/adjfile
# TIMEZONE: timezones are found in /usr/share/zoneinfo
# Note: if unset, the value in /etc/localtime is used unchanged
# KEYMAP: keymaps are found in /usr/share/kbd/keymaps
# CONSOLEFONT: found in /usr/share/kbd/consolefonts (only needed for non-US)
# CONSOLEMAP: found in /usr/share/kbd/consoletrans
# USECOLOR: use ANSI color sequences in startup messages
LOCALE="en_US.UTF-8"
DAEMON_LOCALE="no"
HARDWARECLOCK="UTC"
TIMEZONE="Europe/Berlin"
KEYMAP="de-latin1-nodeadkeys"
#CONSOLEFONT=
#CONSOLEMAP=
USECOLOR="yes"
# HARDWARE
# MODULES: Modules to load at boot-up. Blacklisting is no longer supported.
# Replace every !module by an entry as on the following line in a file in
# /etc/modprobe.d:
# blacklist module
# See "man modprobe.conf" for details.
MODULES=(acpi-cpufreq cpufreq_ondemand tun fuse vboxdrv)
# Udev settle timeout (default to 30)
UDEV_TIMEOUT=30
# Scan for FakeRAID (dmraid) Volumes at startup
USEDMRAID="no"
# Scan for BTRFS volumes at startup
USEBTRFS="no"
# Scan for LVM volume groups at startup, required if you use LVM
USELVM="yes"
# NETWORKING
# HOSTNAME: Hostname of machine. Should also be put in /etc/hosts
HOSTNAME="archlaptop"
# Use 'ip addr' or 'ls /sys/class/net/' to see all available interfaces.
# Wired network setup
# - interface: name of device (required)
# - address: IP address (leave blank for DHCP)
# - netmask: subnet mask (ignored for DHCP) (optional, defaults to 255.255.255.0)
# - broadcast: broadcast address (ignored for DHCP) (optional)
# - gateway: default route (ignored for DHCP)
# Static IP example
# interface=eth0
# address=192.168.0.2
# netmask=255.255.255.0
# broadcast=192.168.0.255
# gateway=192.168.0.1
# DHCP example
# interface=eth0
# address=
# netmask=
# gateway=
interface=wlan0
address=
netmask=
broadcast=
gateway=
# Setting this to "yes" will skip network shutdown.
# This is required if your root device is on NFS.
NETWORK_PERSIST="no"
# Enable these netcfg profiles at boot-up. These are useful if you happen to
# need more advanced network features than the simple network service
# supports, such as multiple network configurations (ie, laptop users)
# - set to 'menu' to present a menu during boot-up (dialog package required)
# - prefix an entry with a ! to disable it
# Network profiles are found in /etc/network.d
# This requires the netcfg package
NETWORKS=(FlosAP)
# DAEMONS
# Daemons to start at boot-up (in this order)
# - prefix a daemon with a ! to disable it
# - prefix a daemon with a @ to start it up in the background
# If you are sure nothing else touches your hardware clock (such as ntpd or
# a dual-boot), you might want to enable 'hwclock'. Note that this will only
# make a difference if the hwclock program has been calibrated correctly.
# If you use a network filesystem you should enable 'netfs'.
DAEMONS=(syslog-ng dbus acpid crond alsa networkmanager @bumblebeed laptop-mode !hwclock ntpd psd)
mkinitcpio.conf
# vim:set ft=sh
# MODULES
# The following modules are loaded before any boot hooks are
# run. Advanced users may wish to specify all system modules
# in this array. For instance:
# MODULES="piix ide_disk reiserfs"
MODULES=""
# BINARIES
# This setting includes any additional binaries a given user may
# wish into the CPIO image. This is run first, so it may be used to
# override the actual binaries used in a given hook.
# (Existing files are NOT overwritten if already added)
# BINARIES are dependency parsed, so you may safely ignore libraries
BINARIES=""
# FILES
# This setting is similar to BINARIES above, however, files are added
# as-is and are not parsed in any way. This is useful for config files.
# Some users may wish to include modprobe.conf for custom module options
# like so:
# FILES="/etc/modprobe.d/modprobe.conf"
FILES=""
# HOOKS
# This is the most important setting in this file. The HOOKS control the
# modules and scripts added to the image, and what happens at boot time.
# Order is important, and it is recommended that you do not change the
# order in which HOOKS are added. Run 'mkinitcpio -H <hook name>' for
# help on a given hook.
# 'base' is _required_ unless you know precisely what you are doing.
# 'udev' is _required_ in order to automatically load modules
# 'filesystems' is _required_ unless you specify your fs modules in MODULES
# Examples:
## This setup specifies all modules in the MODULES setting above.
## No raid, lvm2, or encrypted root is needed.
# HOOKS="base"
## This setup will autodetect all modules for your system and should
## work as a sane default
# HOOKS="base udev autodetect pata scsi sata filesystems"
## This is identical to the above, except the old ide subsystem is
## used for IDE devices instead of the new pata subsystem.
# HOOKS="base udev autodetect ide scsi sata filesystems"
## This setup will generate a 'full' image which supports most systems.
## No autodetection is done.
# HOOKS="base udev pata scsi sata usb filesystems"
## This setup assembles a pata mdadm array with an encrypted root FS.
## Note: See 'mkinitcpio -H mdadm' for more information on raid devices.
# HOOKS="base udev pata mdadm encrypt filesystems"
## This setup loads an lvm2 volume group on a usb device.
# HOOKS="base udev usb lvm2 filesystems"
HOOKS="base udev autodetect pata scsi sata keymap encrypt lvm2 resume filesystems usbinput"
# COMPRESSION
# Use this to compress the initramfs image. With kernels earlier than
# 2.6.30, only gzip is supported, which is also the default. Newer kernels
# support gzip, bzip2 and lzma. Kernels 2.6.38 and later support xz
# compression.
#COMPRESSION="gzip"
#COMPRESSION="bzip2"
#COMPRESSION="lzma"
#COMPRESSION="xz"
#COMPRESSION="lzop"
# COMPRESSION_OPTIONS
# Additional options for the compressor
#COMPRESSION_OPTIONS=""
grub.cfg
# DO NOT EDIT THIS FILE
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
### BEGIN /etc/grub.d/00_header ###
insmod part_gpt
insmod part_msdos
if [ -s $prefix/grubenv ]; then
load_env
fi
set default="0"
if [ x"${feature_menuentry_id}" = xy ]; then
menuentry_id_option="--id"
else
menuentry_id_option=""
fi
export menuentry_id_option
if [ "${prev_saved_entry}" ]; then
set saved_entry="${prev_saved_entry}"
save_env saved_entry
set prev_saved_entry=
save_env prev_saved_entry
set boot_once=true
fi
function savedefault {
if [ -z "${boot_once}" ]; then
saved_entry="${chosen}"
save_env saved_entry
fi
}
function load_video {
if [ x$feature_all_video_module = xy ]; then
insmod all_video
else
insmod efi_gop
insmod efi_uga
insmod ieee1275_fb
insmod vbe
insmod vga
insmod video_bochs
insmod video_cirrus
fi
}
if loadfont unicode ; then
set gfxmode=auto
load_video
insmod gfxterm
set locale_dir=$prefix/locale
set lang=en_US
insmod gettext
fi
terminal_input console
terminal_output gfxterm
set timeout=5
### END /etc/grub.d/00_header ###
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Arch GNU/Linux, with Linux core repo kernel' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-true-194e65d3-b357-430d-b4bb-67a8300d287d' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 b69dee88-a8c9-4af7-a938-7ca6c8ff368c
else
search --no-floppy --fs-uuid --set=root b69dee88-a8c9-4af7-a938-7ca6c8ff368c
fi
echo 'Loading Linux core repo kernel ...'
linux /vmlinuz-linux root=/dev/mapper/VolGroup00-root ro quiet
echo 'Loading initial ramdisk ...'
initrd /initramfs-linux.img
}
menuentry 'Arch GNU/Linux, with Linux core repo kernel (Fallback initramfs)' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-fallback-194e65d3-b357-430d-b4bb-67a8300d287d' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 b69dee88-a8c9-4af7-a938-7ca6c8ff368c
else
search --no-floppy --fs-uuid --set=root b69dee88-a8c9-4af7-a938-7ca6c8ff368c
fi
echo 'Loading Linux core repo kernel ...'
linux /vmlinuz-linux root=/dev/mapper/VolGroup00-root ro quiet
echo 'Loading initial ramdisk ...'
initrd /initramfs-linux-fallback.img
}
### END /etc/grub.d/10_linux ###
### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###
### BEGIN /etc/grub.d/20_memtest86+ ###
### END /etc/grub.d/20_memtest86+ ###
### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###
### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###
### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
Thank you in advance. If you need any more information, please let me know.
Regards,
javex
Last edited by javex (2012-08-10 18:33:41)
Thank you for your reply. I looked further into grub.cfg and removed the quiet option. Apparently it runs all hooks but does not prompt me for a passphrase when running the encrypt hook. Why does this occur?
Edit: I solved this: apparently I forgot to specify a cryptdevice. Since the article about dm-crypt does not talk about GRUB2, I missed that. I will rework that section to cover both GRUB2 and GRUB Legacy.
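For anyone hitting the same thing: the encrypt hook only prompts for a passphrase when the kernel command line carries a cryptdevice parameter. A sketch of the missing piece in /etc/default/grub, assuming the LUKS container is /dev/sda2 and using an arbitrary mapper name of cryptlvm (adjust both to your layout):

```
# /etc/default/grub
# Tell the encrypt hook which device to unlock and what to call the mapping;
# the lvm2 hook then finds the volume group inside /dev/mapper/cryptlvm.
GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda2:cryptlvm"
```

Then regenerate the config with grub-mkconfig -o /boot/grub/grub.cfg so the parameter lands on the linux lines.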
Last edited by javex (2012-08-10 18:33:22) -
Flex components is not showing up after deploying to web server
Hello,
I am working in Flex Builder 2.0 to develop a simple application.
I am running into some issues with custom components.
My directory structure is as follows:
Project_Folder
|-Components
|-bin
|-assets
|-Application.mxml
I put all my custom components in the Components folder.
I have a custom component in there, and Application.mxml is using it.
The code is something like this:
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
layout="absolute" xmlns:test="Components.*"
backgroundGradientColors="[#000000, #000000]">
<test:baseLayer horizontalCenter="0" top="0"/>
</mx:Application>
Here baseLayer is the custom component.
My problem is that when I run it locally it works fine, but when I deploy it to the Apache server the custom components are not showing up. I copied all the contents of the bin folder to my web server root.
I am not sure if deploying the custom component needs some extra step? I thought that since it is in the same project, the swf file is already packed with the custom components.
Can anyone please help me clarify that?
Thank you.
Thanks, this makes sense. Can you tell me how I upload the whole scripts folder to justhost.com? When I try to select the folder at upload, it opens the folder as if I should then select a single file (like the AC_RunActiveContent.js file that is in the folder), so that is what I did. Which makes sense: the path is now broken for the Flash because the file is not in the folder anymore. In other words, what I don't understand is how to get my scripts folder and its contents uploaded as a whole to justhost.com.
Thanks, your email has been very helpful!
10/14/09 12:32 [email protected]
I did finally figure out the problem.
In my case, I was putting all of my files directly into justhost.com with no regard for the way the files were classified in the original folders used to build my site. In other words, you can't just put AC_RunActiveContent.js into justhost.com randomly and expect that your Flash will work.
Instead, you have to create a Scripts folder and then put AC_RunActiveContent.js within that Scripts folder. This sets up the paths to function the way they were originally created in your Dreamweaver document (or whatever HTML editor you happen to be using).
Whether dealing with Flash or images, the main thing to remember is that you have to mirror the structure of your original folders/files exactly.
So try creating a folder called Scripts, then place the AC_RunActiveContent.js file inside this Scripts folder, and see if that works. I assume the folder containing AC_RunActiveContent.js is called Scripts, but if it is called something different, just copy that name exactly and put AC_RunActiveContent.js inside it.
I hope this helps,
sherwulff
-
BPM process switch/fork is not processing!!
Hi Guys,
I have a scenario for 1:n multi-mapping using BPM. My BPM has just 5 steps: the first is a receive step, the next is a transformation (working fine), then a switch with 2 branches: branch one sends Sender 1 to SNC (Prod Active Notific) and branch two sends Sender 2 (ProductDemandInfluencingEventNotification) to SNC. The process gets as far as the switch but does not enter the switch step to send these messages to the senders. What is the condition for the switch step? I put (1=1), but the message is not processed completely.
Any valuable inputs on this, please?
Many thanks in advance
San
Hi Biplab Das,
Many thanks for the very quick reply. I have tried both. Let me explain my scenario; please give me your inputs on it!
The sender-side XML comes with a couple of messages. This XML has to be split into two message types, "Product Activity Data" and "Product Influence Demand", each carrying multiple messages, as follows:
<Root>
<output>
<output>
<output>
<output>
<output>
</Root>
Each output message has multiple submessages. A couple of the submessages go under "Product Activity Data" and a couple of the other messages go under "Product Influence Demand".
Mapping is working fine. Messages are splitting. I have configured BPM as follows:
Step 1: Receiver --> receive the message
Step 2: Mapping --> messages are split
Step 3: Fork step with 3 branches: branch 1 with the 1st receiver determination and 1st sender --> Product Activity Data; branch 2 with the 2nd receiver determination and 2nd sender --> Product Influence Demand; branch 3 to throw an alert
I can see this message failing in MONI; I opened the PE, and the graphical representation shows errors on Sender 1 and Sender 2.
Please can you advise me on this.
Many Thanks
Regards
San -
Components are not displayed in COR6N
Hi experts,
Please help investigate this case:
We confirm process orders using COR6N. In the goods movement screen, only the finished good is displayed; none of its components are.
Checking the material list in COR3, all components are marked "backflush". The user can release the process order as normal.
Could you suggest anything wrong in the master data or the process order itself that leads to this issue?
Many thanks,
Duc.
If backflush is ticked, it will definitely show in the goods movement screen in CO11N.
If, in an earlier confirmation, you checked the clear reservation tick in CO11N, then the final issue tick will appear.
Please check in the component overview whether final issue is checked or not.
MDS Customization Problem with multiple root components
Hello,
Oracle JDeveloper 11.1.1.5.0 and WebLogic Server Version: 10.3.5.0
I am using Oracle MDS Customization class with the following classes:
oracle.adf.view.rich.change.MDSDocumentChangeManager
org.apache.myfaces.trinidad.change.ChangeManager
org.apache.myfaces.trinidad.change.MoveChildComponentChange
Based on these classes I am moving UI components
<af:panelStretchLayout>
<f:facet name="center">
<af:region />
</f:facet>
</af:panelStretchLayout>
into another component thanks to the MDS.
Once I do the movement and persist it, I receive the following warning message:
*<RegionRenderer> <encodeAll> The region component with id: pt1:r1:0:pt2:r1:0:r1 has detected a page fragment with multiple root components. Fragments with more than one root component may not display correctly in a region and may have a negative impact on performance. It is recommended that you restructure the page fragment to have a single root component.*
When I run the application in Oracle JDeveloper 11.1.1.5.0 it works fine, but when I deploy it on WebLogic Server version 10.3.5.0 it does not work.
I receive the same warning message on the server and in JDeveloper.
I think that MDS makes a copy of the components I move, and because of that it tells me "a page fragment with multiple root components".
The problem is not caused by duplication of any of the tags <af:document> <f:view> <f:form> <html> <head> <body>.
After the movement and persistence, I also call the FacesContext to reset its view root with the following code:
FacesContext context =FacesContext.getCurrentInstance();
String currentView = context.getViewRoot().getViewId();
ViewHandler vh = context.getApplication().getViewHandler();
UIViewRoot x = vh.createView(context, currentView);
x.setViewId(currentView);
context.setViewRoot(x);
The idea is for the entire JSF tree to be rebuilt on entering the page, with the aim of clearing the previous drag-and-drop settings.
Do you know how I can overcome the problem on the server WebLogic Server Version: 10.3.5.0?
This worked on Oracle JDeveloper 11.1.1.3.0 and now the problem is with Oracle JDeveloper 11.1.1.5.0.
Regards,
Niki
Hi,
What the error means is that you have more than one node under the jsp:root node of the fragment. Say the content of your fragment is
<af:panelGroupLayout id="pgl1">
</af:panelGroupLayout>
Then this is okay
if you have
<af:panelGroupLayout id="pgl1">
</af:panelGroupLayout>
<af:panelGroupLayout id="pgl2">
</af:panelGroupLayout>
or
<af:panelGroupLayout id="pgl1">
</af:panelGroupLayout>
<af:commandButton id="cb1"/>
Then these mean duplicated root components. The JDeveloper IDE flags this as a warning (Structure window) when you select the page fragment.
Frank -
"detected a page fragment with multiple root components" warning
I am getting a warning on the standalone WLS when I run my page that contains a taskflow as region. I am using a page fragment in my taskflow.
<Warning> <oracle.adfinternal.view.faces.renderkit.rich.RegionRenderer> <ADF_FACES-60099> <The region component with id: ptMain:r1 has detected a page fragment with multiple root components. Fragments with more than one root component may not display correctly in a region and may have a negative impact on performance. It is recommended that you restructure the page fragment to have a single root component.>
The warning states the obvious; however, I have everything within a panelHeader in my page fragment. Also, I do not get the warning on the integrated WLS. Any ideas as to why this warning still pops up in the log? I am using JDev 11.1.1.3.
Thanks,
Jessica
Thank you for responding. I do not have any popups. I do, however, have another region nested within this fragment (I get this warning on another fragment that doesn't have a nested region, though). Here is the code for my fragment.
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
xmlns:af="http://xmlns.oracle.com/adf/faces/rich"
xmlns:f="http://java.sun.com/jsf/core">
<af:panelHeader text="Pawn"
binding="#{backingBeanScope.backing_Fragments_PawnSearch.ph1}"
id="ph1" type="default">
<af:panelFormLayout id="pfl2">
<af:panelSplitter binding="#{backingBeanScope.backing_Fragments_PawnSearch.ps1}"
id="ps1" orientation="vertical" splitterPosition="62"
inlineStyle="width:775px; height:660px;">
<f:facet name="first">
<af:panelBox text="Search #{bindings.agency.inputValue} Data to Update"
binding="#{backingBeanScope.backing_Fragments_PawnSearch.pb1}"
id="pb1">
<f:facet name="toolbar"/>
<af:panelGroupLayout id="pgl3" layout="horizontal">
<af:inputText value="#{bindings.control_number.inputValue}"
label="Control Number" required="true"
columns="#{bindings.control_number.hints.displayWidth}"
maximumLength="#{bindings.control_number.hints.precision}"
shortDesc="#{bindings.control_number.hints.tooltip}"
id="it1">
<f:validator binding="#{bindings.control_number.validator}"/>
</af:inputText>
<af:inputDate value="#{bindings.trans_date.inputValue}"
label="Date" required="true"
shortDesc="#{bindings.trans_date.hints.tooltip}"
id="id1">
<f:validator binding="#{bindings.trans_date.validator}"/>
<af:convertDateTime pattern="#{bindings.trans_date.format}"/>
</af:inputDate>
<af:inputText value="#{bindings.agency.inputValue}" simple="true"
required="#{bindings.agency.hints.mandatory}"
columns="#{bindings.agency.hints.displayWidth}"
maximumLength="#{bindings.agency.hints.precision}"
shortDesc="#{bindings.agency.hints.tooltip}"
binding="#{backingBeanScope.backing_Fragments_PawnSearch.it2}"
id="it2" visible="false">
<f:validator binding="#{bindings.agency.validator}"/>
</af:inputText>
<af:commandButton actionListener="#{bindings.ExecuteWithParams.execute}"
text="Search"
disabled="#{!bindings.ExecuteWithParams.enabled}"
id="cb5"
returnListener="#{backingBeanScope.backing_Fragments_PawnSearch.refreshPage}"
action="#{backingBeanScope.backing_Fragments_PawnSearch.RenderMe}">
<af:setActionListener from="#{bindings.PawnItemView1Iterator.currentRowKeyString}"
to="#{requestScope.pawnkey}"/>
</af:commandButton>
<af:spacer width="10" height="10"
binding="#{backingBeanScope.backing_Fragments_PawnSearch.s1}"
id="s1"/>
<af:goButton text="Clear Values and Create New" id="gb1"
destination="index.jspx"
rendered="#{backingBeanScope.backing_Fragments_PawnSearch.saveButtonRendered}"/>
</af:panelGroupLayout>
</af:panelBox>
</f:facet>
<f:facet name="second">
<af:panelGroupLayout binding="#{backingBeanScope.backing_Fragments_PawnSearch.pgl4}"
id="pgl4" layout="scroll" partialTriggers=""
visible="true">
<af:panelGroupLayout binding="#{backingBeanScope.backing_Fragments_PawnSearch.pgl6}"
id="pgl6" inlineStyle="width:775px;"
visible="#{backingBeanScope.backing_Fragments_PawnSearch.renderTF}">
<af:region value="#{bindings.PawnEntryFormTF1.regionModel}"
id="r1" inlineStyle="width:750px;"/>
</af:panelGroupLayout>
<af:panelGroupLayout binding="#{backingBeanScope.backing_Fragments_PawnSearch.pgl5}"
id="pgl5" layout="horizontal"
visible="#{backingBeanScope.backing_Fragments_PawnSearch.renderMessage}">
<af:outputFormatted value="No #{bindings.agency.inputValue} Pawn data matching the Control Number and Transaction Date from above."
binding="#{backingBeanScope.backing_Fragments_PawnSearch.of1}"
id="of1"
inlineStyle="font-weight:bolder; font-size:small;"/>
<af:spacer width="10" height="10"
binding="#{backingBeanScope.backing_Fragments_PawnSearch.s2}"
id="s2"/>
<af:goButton text="Clear Search and Start Again"
binding="#{backingBeanScope.backing_Fragments_PawnSearch.gb2}"
id="gb2" destination="index.jspx"/>
</af:panelGroupLayout>
</af:panelGroupLayout>
</f:facet>
</af:panelSplitter>
</af:panelFormLayout>
</af:panelHeader>
<!--oracle-jdev-comment:auto-binding-backing-bean-name:backing_Fragments_PawnSearch-->
</jsp:root> -
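As a quick sanity check for the "multiple root components" warning, a fragment can be parsed and the element children of jsp:root counted - more than one means the fragment should be wrapped in a single af:panelGroupLayout or af:group, as suggested above. A rough standalone sketch (the class name and sample fragments are illustrative, not part of ADF):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class FragmentRootCheck {
    // Counts the element children directly under the document root (jsp:root).
    // A fragment used in an af:region should report exactly 1.
    static int countRootComponents(String jspx) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(jspx.getBytes(StandardCharsets.UTF_8)));
        NodeList children = doc.getDocumentElement().getChildNodes();
        int count = 0;
        for (int i = 0; i < children.getLength(); i++) {
            if (children.item(i).getNodeType() == Node.ELEMENT_NODE) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        String ok = "<jsp:root xmlns:jsp='http://java.sun.com/JSP/Page'"
                  + " xmlns:af='http://xmlns.oracle.com/adf/faces/rich'>"
                  + "<af:panelHeader id='ph1'/></jsp:root>";
        String bad = "<jsp:root xmlns:jsp='http://java.sun.com/JSP/Page'"
                   + " xmlns:af='http://xmlns.oracle.com/adf/faces/rich'>"
                   + "<af:panelHeader id='ph1'/><af:popup id='p1'/></jsp:root>";
        System.out.println(countRootComponents(ok));  // 1 - fine for a region
        System.out.println(countRootComponents(bad)); // 2 - triggers the warning
    }
}
```

The fragment posted above does have a single root (the panelHeader), so a count of 1 here would support the suspicion that the warning comes from somewhere else, such as the nested region's own fragment.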
JSP 2.0 JSTL not processing in defined context - Tomcat 5.0.19
Hi,
I am using Tomcat 5 for my development. If I define the context in server.xml, the JSTL tags are not processed. I added the jstl.jar and standard.jar files to the common/lib directory.
I have a simple jsp file p.jsp
======================
p.jsp
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@ taglib prefix="sql" uri="http://java.sun.com/jsp/jstl/sql" %>
JSLT Test
<br>
<c:out value="${order.amount + 5}"/>
Test 1.
I copied p.jsp into webapps/ROOT directory and started tomcat. Using webbrowser I accessed p.jsp.
http://localhost:8080/p.jsp
output:
======
JSLT Test
${order.amount + 5}
Test 2:
I created new context in server.xml file like this:
<Context path="/test" docBase="test"
debug="5" reloadable="true" crossContext="true"/>
copied the p.jsp file into the webapps/test directory.
I got the following output:
http://localhost:8080/test/p.jsp
output:
======
JSLT Test
${order.amount + 5}
Test 3.
Now I didn't define any context; I created a directory under webapp/nocontext and copied the p.jsp file there.
JSLT Test
${order.amount + 5}
Any idea why this page is not being processed? The output I am looking for is:
JSLT Test 5.
SR

Hi all,
I just found the problem: my web.xml file header was wrong. The following web.xml is working.
<web-app xmlns="http://java.sun.com/xml/ns/j2ee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee web-app_2_4.xsd"
version="2.4">
<display-name>Welcome to Tomcat</display-name>
<description>
Welcome to Tomcat
</description>
</web-app>
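The version attribute on the corrected descriptor above is what matters: Tomcat 5 only evaluates ${...} expressions in template text when web.xml declares Servlet 2.4; with an older DTD-style descriptor (which has no version attribute) the expression is rendered literally, exactly as in the failing tests. A small sanity check for this, with an illustrative class name:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class WebXmlVersionCheck {
    // Returns the servlet-spec version declared on <web-app>, or "" if the
    // attribute is absent (as on a DTD-based 2.3 descriptor).
    static String declaredVersion(String webXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(webXml.getBytes(StandardCharsets.UTF_8)));
        return doc.getDocumentElement().getAttribute("version");
    }

    public static void main(String[] args) throws Exception {
        String fixed = "<web-app xmlns=\"http://java.sun.com/xml/ns/j2ee\" version=\"2.4\"/>";
        String old = "<web-app/>"; // no version attribute, like a 2.3 descriptor
        // With 2.4, ${...} in template text is evaluated; with anything older,
        // Tomcat 5 prints the expression literally.
        System.out.println("fixed descriptor: version=" + declaredVersion(fixed));
        System.out.println("old descriptor:   version=" + declaredVersion(old));
    }
}
```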
http://localhost:8080/test/p.jsp
output:
======
JSLT Test
5 -
Expense reports are not processing for future date?
hi folks,
in our company, expense reports are not processed for a future date; even though the end date is one month from now, the expense reports are not getting paid.
can anybody assist me to come out of this issue.
thanks in advance,
regards
bhanu

Hi,
Could you please share how to get into debug mode in a dynamic action program? I have a scenario in SAP HR: when an employee is hired, Infotype 0032 is updated during the hiring action based on the following conditions:
********INSERT IT0032 FOR CANDIDATE ID *************
P0001-PERSG<>'5' - True
P0000-MASSN='01'/X
P0000-MASSN='H1'/X
P0000-MASSN<>'84'/X
P0000-MASSN<>'86'/X
P0000-MASSN<>'99'/X
GET_PNALT(ZHR_CREATEP0032)
RP50D-FIELDS <> ''
INS,0032,,,(P0000-BEGDA),(P0000-ENDDA)/D
P0032-PNALT=RP50D-FIELD1
P0032-ZZ_COMPID='05' _ True
The infotype record is being created, but there is no data in RP50D-FIELD1, so I tried to debug the subroutine GET_PNALT(ZHR_CREATEP0032) as mentioned in the dynamic action; it does not enter debug mode.
Do you have any idea how to debug the program used in the dynamic action?
Regards,
Madhu -
WLSOAPFaultException: MustUnderstand header not processed
WLSOAPFaultException: MustUnderstand header not processed '{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}Security'
I have deployed my BSSV to the localhost WLS successfully, and I can call the published BSSV method from the admin console. The results come back correctly.
Then, on another machine, I created a proxy for the above WLS and everything looked fine; proxy generation was successful.
Now here is my code:
InventoryManagerPortClient client = new InventoryManagerPortClient();
client.addUNTCredentialProvider("weblogic", "weblogic1");
client.setPortCredentialProviderList();
((Stub)client.getPort())._setProperty(Stub.USERNAME_PROPERTY,"weblogic");
((Stub)client.getPort())._setProperty(Stub.PASSWORD_PROPERTY, "weblogic1");
((Stub)client.getPort())._setProperty(Stub.ENDPOINT_ADDRESS_PROPERTY, "http://10.139.153.143:7101/context-root-JP410000/" + "InventoryManagerPort");
client.publishedBSSVmethod();
here is the error I'm getting:
java.rmi.RemoteException: SOAPFaultException - FaultCode [{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}MustUnderstand] FaultString [MustUnderstand header not processed '{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}Security'] FaultActor [null]No Detail; nested exception is:
weblogic.wsee.jaxrpc.soapfault.WLSOAPFaultException: MustUnderstand header not processed '{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}Security'
at inventorymanager.InventoryManager_Stub.getItemPrice(InventoryManager_Stub.java:139)
at inventorymanager.InventoryManagerPortClient.getItemPrice(InventoryManagerPortClient.java:180)
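A MustUnderstand fault like the one above means the receiving side rejected the mustUnderstand="1" wsse:Security header because nothing in its handler chain claimed to process it - typically the client is not attaching the WS-Security credentials the server's policy expects. In JAX-WS, a SOAPHandler advertises the headers it understands by returning their QNames from getHeaders(); the sketch below only illustrates that contract with the Security header's QName. The class and method names are illustrative, and on WebLogic the usual fix is attaching the correct credential provider, as the client code above attempts:

```java
import java.util.Collections;
import java.util.Set;
import javax.xml.namespace.QName;

public class WsseHeaderInfo {
    // QName of the WS-Security header named in the fault above.
    static final QName WSSE_SECURITY = new QName(
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd",
        "Security");

    // A JAX-WS SOAPHandler would return this set from its getHeaders() method;
    // any mustUnderstand header whose QName is in the set counts as "processed"
    // by the runtime instead of raising a MustUnderstand fault.
    static Set<QName> understoodHeaders() {
        return Collections.singleton(WSSE_SECURITY);
    }

    public static void main(String[] args) {
        System.out.println("claiming to understand: " + WSSE_SECURITY);
    }
}
```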
Please suggest how to resolve this. Thanks in advance...

Is there any solution for this problem? Please suggest, it's urgent!
Thanks.. -
Hi all,
Having a bit of a nightmare with Hyper-V on my Windows 8 Pro laptop - whenever I create a new VM and try to start it, I receive the following error:
'Failed to start the virtual machine 'test' because one of the Hyper-V components is not running.'
I sometimes also receive an error saying something along the lines of 'Unable to change virtual machine state'
I have done a lot of searching and seen two common answers. The first is to try removing the Hyper-V role and re-adding it; I have tried this several times to no avail (Intel VT and all virtualisation capabilities are enabled in the BIOS). The second fix some people mention relates to editing the VMX configuration file and adding the line
hypervisor.cpuid.v0 = "FALSE" - but I thought VMX files were only present in VMware virtual machines...
Any help would be greatly appreciated.
Many thanks in advance.
James

The hypervisor event log message is generated only at boot - so this would be expected. Also, vmbus should not be running on the host (I was looking at a VM yesterday)... This error message is generated only when the VID (Virtualization Infrastructure Driver) is determined not to be running or cannot communicate with the hypervisor.
I have seen this caused by anti-virus software in the past, or by Driver Verifier being configured. Since it looks like you checked Driver Verifier already: do you have anti-virus software installed? If so, have you followed the best practices and exempted the Hyper-V services?
http://technet.microsoft.com/en-us/library/dd283088(WS.10).aspx
To resolve this problem, configure the real-time scanning component within your antivirus software to exclude the following directories and files:
Default virtual machine configuration directory (C:\ProgramData\Microsoft\Windows\Hyper-V)
Custom virtual machine configuration directories
Default virtual hard disk drive directory (C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks)
Custom virtual hard disk drive directories
Custom replication data directories, if you are using Hyper-V Replica
Snapshot directories
Vmms.exe (Note: This file may have to be configured as a process exclusion within the antivirus software.)
Vmwp.exe (Note: This file may have to be configured as a process exclusion within the antivirus software.)
-Taylor Brown<br/> -Program Manager, Hyper-V<br/> -http://blogs.msdn.com/taylorb -
Root certificate is not trusted
Hi!
I have installed the internally signed certificates according to the steps in the Oracle documentation; however, I still get the error "This CA Root certificate is not trusted. To enable trust, install this certificate in the Trusted Root Certification Authorities store".
Below is the error I receive when starting UCM server:
<27-Dec-2011 13:39:18 o'clock CET> <Notice> <Security> <BEA-090898> <Ignoring the trusted CA certificate "CN=VeriSign Universal Root Certification Authority,OU=(c) 2008 VeriSign\, Inc. - For authorized use only,OU=VeriSign Trust Network,O=VeriSign\, Inc.,C=US". The loading of the trusted certificate list raised a certificate parsing exception PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11.>
I get this error when I click on the certificate in the browser. Below are the steps I performed. Can anyone help me understand, perhaps, I import my certificates incorrectly?
1. I've created a custom keystore using the following command:
keytool -genkey -alias mykey -keyalg RSA -keysize 2048 -dname "CN=<domain name like test.com etc>, OU=<unit like Customer Support etc>, O=<your organization>, L=<your location>, ST=<state>, C=<country code like US>" -keystore identity.jks
2. Next, I generated a certificate signing request using this command:
keytool -certreq -alias mykey -file cert.csr -keystore identity.jks
3. After that, I received three certificates signed by our internal authority: main, intermediate, and root. I imported each one of them.
4. I inserted them one by one into the custom store generated during step 1, using the following command for each certificate:
keytool -import -trustcacerts -keystore mystore.jks -storepass password -alias Root -file Trustedcaroot.txt
5. I also inserted all three into the JAVA_HOME cacerts file, located at C:/Program Files/Java/jrockit.../jre/lib/security/cacerts, using the same command as in step 4.
Next, I configured the UCM_server1 KEYSTORE to use Custom Identity and Java Trust, pointing Custom Identity to the custom keystore file created in step 1 and Java Trust to the cacerts file updated in step 5.
Despite all the steps above, I cannot get the certificates to work. When I look at the certificate, it tells me that "This CA Root certificate is not trusted. To enable trust, install this certificate in the Trusted Root Certification Authorities store".
Edited by: 867498 on 27-Dec-2011 05:45

I've managed to get rid of the error; however, the certificate still does not reflect the trusted chain and doesn't point to the "Root" certificate. Any ideas?
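One way to confirm the root actually landed in the trust store the JVM is using is to open the store with the java.security.KeyStore API and list its trusted entries. This is a rough sketch against the JDK's default cacerts; the path, the "root" alias, and the stock "changeit" password are assumptions - substitute the custom store and password from the steps above if that is the store WebLogic is configured to use:

```java
import java.io.FileInputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.KeyStore;
import java.util.Collections;

public class TrustStoreCheck {
    // Counts the trusted certificate entries in the JDK's default cacerts file.
    static int countTrustedEntries() throws Exception {
        Path cacerts = Paths.get(System.getProperty("java.home"),
                                 "lib", "security", "cacerts");
        if (!Files.exists(cacerts)) {
            // older JDK 8 layout, where java.home points at the JDK directory
            cacerts = Paths.get(System.getProperty("java.home"),
                                "jre", "lib", "security", "cacerts");
        }
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(cacerts.toFile())) {
            ks.load(in, "changeit".toCharArray()); // default JDK trust-store password
        }
        int trusted = 0;
        for (String alias : Collections.list(ks.aliases())) {
            if (ks.isCertificateEntry(alias)) {
                trusted++;
                // To confirm your internal root is present, match its alias here:
                // if (alias.equalsIgnoreCase("root")) System.out.println("internal root found");
            }
        }
        return trusted;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("trusted certificate entries: " + countTrustedEntries());
    }
}
```

If the internal root shows up here but the browser still flags the chain, the problem is usually on the serving side: the server must present the full chain (main plus intermediate), and the root must also be installed in the client machine's own trusted-root store.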