Issue MM PO Archiving
Hello all,
we had an issue when archiving purchase orders.
First of all, we have a large number of open purchase orders, meaning the "delivery completed" flag and the "final invoice" flag are missing.
These POs can't be deleted.
So these flags have to be set. We created a report which uses ME22N and sets those flags. After this, our issue appeared:
unfortunately, many purchase orders created new output messages and sent them directly to the vendor. We had a lot of trouble with this.
How can this happen? Why is this flag an indicator for message determination?
How can I prevent this?
I could set the flags directly on the database, but our company doesn't want to make updates directly on the DB.
Any ideas?
thanks for help
Hi Christian,
Another idea: if you don't want to change any customizing, you can create your own output routine numbers in VOFM to implement this requirement. In SPRO / Materials Management / Purchasing / Messages / Output Control / Access Sequences you can assign the routine and avoid creating the message. See SAP Note 39462 (Expand field catalog in message determination for purchasing) to create a new field (for instance, the PO date). You can then set a condition that sets SY-SUBRC = 4 (i.e. output not created) if the difference in days is greater than n days.
I hope this helps you
Regards
Eduardo
Similar Messages
-
Field catalog issue in Data archiving
Hi Experts,
We are facing an issue with an SAP CRM data archiving info structure. I have created the field catalog for the service ticket (archiving object CRM_SERORD) and Act On (archiving object CRM_ACT_ON).
In our scenario, multiple business partners (consumer, organisational and employee) are attached to each service ticket, but my ZARxx tables store only one entry, i.e. one service ticket and one business partner.
My requirement is that it should show all the business partners for each service ticket.
Could you please help me with this.
Regards,
Srini
Hi Srini,
We have the same problem for our project. Did you find a solution for this problem ?
Best regards,
Nevets -
OBIEE Dev VM Install Issue:Cannot open archive "ARCHIVE.zip" as archive
All,
I downloaded all eleven files for the OBIEE 11.1.1.6.2 BP1 - Sample Application (V207).
I checked the MD5 sums of the first two files and they match; however, when I use 7-Zip to extract the files, I get an error that says: Cannot open archive "ARCHIVE.zip" as archive.
Has anyone encountered this issue? I verified that the files are not read-only and that they are not "blocked" by Windows.
I am running 64-bit Windows 7 Home Edition on a Dell laptop.
I encountered this error on two separate archives with correct MD5 Sums.
Any help would be appreciated! Thanks!
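Since the parts pass their individual MD5 checks, it can help to script the comparison across all eleven parts before pointing 7-Zip at the first one: a single truncated part, or one silently replaced by a server error page, further down the set produces exactly this "Cannot open archive" error. A minimal sketch, assuming GNU md5sum and a checksums.md5 list (the filenames are hypothetical; substitute the real V207 part names):

```shell
# Sketch: verify every part of a split download against an .md5 list
# before extraction. 7-Zip generally needs to be pointed at the FIRST
# part of a split set; it locates the remaining parts itself.
verify_parts() {
    md5_list=$1
    shift
    for f in "$@"; do
        # expected hash for this filename from the checksum list
        want=$(awk -v f="$f" '$2 == f {print $1}' "$md5_list")
        # actual hash of the file on disk
        have=$(md5sum "$f" | awk '{print $1}')
        if [ "$want" != "$have" ]; then
            echo "MISMATCH $f"
            return 1
        fi
        echo "OK $f"
    done
}
# usage (hypothetical names):
#   verify_parts checksums.md5 SampleApp_V207_Part*.zip \
#       && 7z x SampleApp_V207_Part1.zip
```

A bad part has the right name and a plausible size, so it passes a visual check, but it fails the checksum comparison immediately.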
Nathan
Hi,
I got the same problem. Have you found any solution for this? Thanks. -
HELP - PULL-DOWN ISSUE WORKING WITH ARCHIVAL FILM IN 24P
Bear with me here... I'm working on a historical doc for a local PBS station. Camera footage was originated on the HDX 200 in 720p-24n. Much of the program uses old 16mm film that's been transferred to an uncompressed video file using the 16mm "Sniper" (a pseudo-telecine transfer system: it scans the film frames into a PC one at a time and stores the images as a single movie file using a proprietary codec). You can tell the Sniper's software what frame rate the original film footage was shot at, i.e. 18fps, 24fps and so on. However, when you export it back out, it only outputs a 29.97 interlaced file.
Here's the problem: the film I'm working with was originated at 18fps, not 24. But I'm working with a 24p sequence. I need to convert the footage so that it plays cleanly in a 24p environment. But I can't run it through CT and do a reverse telecine to remove the pulldown, because that's designed for footage originated at 24fps only. When I try it on footage shot at 18fps, I end up with a weird flashing/banding thing as it tries to apply a 24-frame pulldown to an 18-frame image.
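The cadence mismatch can be seen with a little arithmetic: 18 fps into 24p is a 3:4 ratio, so every third source frame has to be shown twice, which is a different pattern from the 2:3 pulldown a reverse-telecine tool expects. A throwaway sketch (shell, purely illustrative) that maps each 24p output frame n to source frame floor(n * 18 / 24):

```shell
# Sketch: which 18fps source frame lands on each 24p output frame.
# A repeat appears every 4th output frame (3:4 cadence), not the
# 2:3 cadence that a 24fps reverse-telecine pass looks for.
map_18_to_24() {
    count=$1
    n=0
    frames=""
    while [ "$n" -lt "$count" ]; do
        frames="$frames $(( n * 18 / 24 ))"
        n=$(( n + 1 ))
    done
    # strip the leading space before printing
    echo "${frames# }"
}
```

For example, `map_18_to_24 8` prints `0 0 1 2 3 3 4 5`: source frames 0 and 3 each appear twice, which is the cadence a 24fps-only pulldown remover misreads as flashing/banding.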
I'm thinking that there has to be a plug-in either for FCP, CT or for the Sniper that can work a pull-down from 18fps film to 24p video.
Anyone have any ideas?
Tim Walton
KVIE Public Television
Yeah, I thought of that. But I kinda wanted to keep the program in native 24p if I could.
If I did it that way, wouldn't I have to go ahead and edit the program in 24p, then export it out at 29.97, then re-import it into a 29.97 Sequence and finally add the archive material? -
Capture process issue...archive log missing!!!!!
Hi,
The Oracle Streams capture process is alternating between the INITIALIZING and DICTIONARY INITIALIZATION states and does not proceed past this to capturing updates made on the table.
We have accidentally lost archive logs and have no backups of them.
Now I am going to recreate the capture process.
How can I start the capture process from a new SCN?
And what is the better way to remove the archive log files from the central server, given that an SCN is still used by the capture processes?
Thanks,
Faziarain
Edited by: [email protected] on Aug 12, 2009 12:27 AM
When using dbms_streams_adm to add a capture, also perform a dbms_capture_adm.build. You will then see a 'YES' in the dictionary_begin column of v$archived_log, which means that the first_change# of this archive log is the first SCN suitable for starting the capture.
'rman' is the preferred way in 10g+ to remove the archives, as it is aware of Streams constraints. If you can't use rman to purge the archives, then you need to check the minimum required SCN on your system by script and act accordingly.
Since 10g I recommend using rman, but nevertheless, here is the script I made in 9i, in the old times when rman was eating the archives needed by Streams with appetite.
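Before the full listing, the script's core selection trick — act only on the oldest fraction of the files matching an extension — can be isolated in a few lines (a sketch, not part of the original script): `ls -tr` sorts oldest first, so the first cpt/PART entries are the ones to process.

```shell
# Sketch of the core selection used below: emit the oldest 1/part
# of the files matching *.ext in dir.
oldest_fraction() {
    dir=$1 ext=$2 part=$3
    cd "$dir" || return 1
    cpt=$(ls -tr *."$ext" 2>/dev/null | wc -w)
    cpt=$(( cpt ))               # normalize whitespace from wc
    [ "$cpt" -gt 0 ] || return 0 # nothing matched: nothing to do
    ls -tr *."$ext" | head -n $(( cpt / part ))
}
```

For instance, `oldest_fraction /arc/POLDEV arc 2` would list the older half of the *.arc files, which is what the script's `-r 2` default does.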
#!/usr/bin/ksh
# program : watch_arc.sh
# purpose : check your archive directory and if actual percentage is > MAX_PERC
# then undertake the action coded by -a param
# Author : Bernard Polarski
# Date : 01-08-2000
# 12-09-2005 : added option -s MAX_SIZE
# 20-11-2005 : added option -f to check if an archive is applied on data guard site before deleting it
# 20-12-2005 : added option -z to check if an archive is still needed by logminer in a streams operation
# set -xv
#--------------------------- default values if not defined --------------
# put here default values if you don't want to code then at run time
MAX_PERC=85
ARC_DIR=
ACTION=
LOG=/tmp/watch_arch.log
EXT_ARC=
PART=2
#------------------------- Function section -----------------------------
get_perc_occup()
{
cd $ARC_DIR
if [ $MAX_SIZE -gt 0 ];then
# size is given in mb, we calculate all in K
TOTAL_DISK=`expr $MAX_SIZE \* 1024`
USED=`du -ks . | tail -1| awk '{print $1}'` # in Kb!
else
USED=`df -k . | tail -1| awk '{print $3}'` # in Kb!
if [ `uname -a | awk '{print $1}'` = HP-UX ] ;then
TOTAL_DISK=`df -b . | cut -f2 -d: | awk '{print $1}'`
elif [ `uname -s` = AIX ] ;then
TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
elif [ `uname -s` = ReliantUNIX-N ] ;then
TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
else
# works on Sun
TOTAL_DISK=`df -b . | sed '/avail/d' | awk '{print $2}'`
fi
fi
USED100=`expr $USED \* 100`
USG_PERC=`expr $USED100 / $TOTAL_DISK`
echo $USG_PERC
}
#------------------------ Main process ------------------------------------------
usage()
{
cat <<EOF
Usage : watch_arc.sh -h
watch_arc.sh -p <MAX_PERC> -e <EXTENTION> -l -d -m <TARGET_DIR> -r <PART>
-t <ARCHIVE_DIR> -c <gzip|compress> -v <LOGFILE>
-s <MAX_SIZE (meg)> -i <SID> -g -f
Note :
-c compress file after move using either compress or gzip (if available)
if -c is given without -m then file will be compressed in ARCHIVE DIR
-d Delete selected files
-e Extension of files to be processed
-f Check if log has been applied, required -i <sid> and -g if v8
-g Version 8 (use svrmgrl instead of sqlplus /
-i Oracle SID
-l List the files that would be processed by -d or -m
-h help
-m move file to TARGET_DIR
-p Max percentage above which the action is triggered.
Actions are of type -l, -d or -m
-t ARCHIVE_DIR
-s Perform action if size of target dir is bigger than MAX_SIZE (meg)
-v report action performed in LOGFILE
-r Part of the files that will be affected by the action :
2=half, 3=a third, 4=a quarter .... [ default=2 ]
-z Check if log is still needed by logminer (used in streams),
it requires -i <sid> and also -g for Oracle 8i
This program lists, deletes or moves half of all files whose extension is given [ or default 'arc' ].
It checks the size of the archive directory and, if the percentage occupancy is above the given limit,
it performs the action on the older half of the files.
How to use this prg :
run this file from the crontab, say, each hour.
example
1) Delete archives sharing a common arch disk: when you are at 85% of 2500 MB, delete half of the files
whose extension is 'arc', using the default affected fraction (default is -r 2)
0,30 * * * * /usr/local/bin/watch_arc.sh -e arc -t /arc/POLDEV -s 2500 -p 85 -d -v /var/tmp/watch_arc.POLDEV.log
2) Delete archives sharing a common disk with other DBs in /archive: act at 90% of 140 GB, deleting
a quarter of all files (-r 4) whose extension is 'dbf', but connect first as sysdba in the POLDEV DB (-i) to check that they are
applied (-f is a Data Guard option)
watch_arc.sh -e dbf -t /archive/standby/CITSPRD -s 140000 -p 90 -d -f -i POLDEV -r 4 -v /tmp/watch_arc.POLDEV.log
3) Delete archives of DB POLDEV when it reaches 75%, affecting a third of the files, but connect to the DB to check that
Logminer does not still need each archive (-z). This is useful in 9iR2 when using rman, as rman does not support DELETE INPUT
in connection with Logminer.
watch_arc.sh -e arc -t /archive/standby/CITSPRD -p 75 -d -z -i POLDEV -r 3 -v /tmp/watch_arc.POLDEV.log
EOF
}
#------------------------- Function section -----------------------------
if [ "x-$1" = "x-" ];then
usage
exit
fi
MAX_SIZE=-1 # disable this feature if it is not specifically selected
while getopts c:e:p:m:r:s:i:t:v:dhlfgz ARG
do
case $ARG in
e ) EXT_ARC=$OPTARG ;;
f ) CHECK_APPLIED=YES ;;
g ) VERSION8=TRUE;;
i ) ORACLE_SID=$OPTARG;;
h ) usage
exit ;;
c ) COMPRESS_PRG=$OPTARG ;;
p ) MAX_PERC=$OPTARG ;;
d ) ACTION=delete ;;
l ) ACTION=list ;;
m ) ACTION=move
TARGET_DIR=$OPTARG
if [ ! -d $TARGET_DIR ] ;then
echo "Dir $TARGET_DIR does not exist"
exit
fi;;
r) PART=$OPTARG ;;
s) MAX_SIZE=$OPTARG ;;
t) ARC_DIR=$OPTARG ;;
v) VERBOSE=TRUE
LOG=$OPTARG
if [ ! -f $LOG ];then
> $LOG
fi ;;
z) LOGMINER=TRUE;;
esac
done
if [ "x-$ARC_DIR" = "x-" ];then
echo "NO ARC_DIR : aborting"
exit
fi
if [ "x-$EXT_ARC" = "x-" ];then
echo "NO EXT_ARC : aborting"
exit
fi
if [ "x-$ACTION" = "x-" ];then
echo "NO ACTION : aborting"
exit
fi
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
if [ ! "x-$ACTION" = "x-move" ];then
ACTION=compress
fi
fi
if [ "$CHECK_APPLIED" = "YES" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
if [ "$VERSION8" = "TRUE" ];then
ret=`svrmgrl <<EOF
connect internal
select max(sequence#) from v\\$log_history ;
EOF`
LAST_APPLIED=`echo $ret | sed 's/.*------ \([^ ][^ ]* \).*/\1/' | awk '{print $1}'`
else
ret=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off
select max(SEQUENCE#) FROM V\\$ARCHIVED_LOG where applied = 'YES';
EOF`
LAST_APPLIED=`echo $ret | awk '{print $1}'`
fi
elif [ "$LOGMINER" = "TRUE" ];then
if [ -n "$ORACLE_SID" ];then
export PATH=$PATH:/usr/local/bin
export ORAENV_ASK=NO
export ORACLE_SID=$ORACLE_SID
. /usr/local/bin/oraenv
fi
var=`sqlplus -s '/ as sysdba' <<EOF
set pagesize 0 head off pause off serveroutput on
DECLARE
hScn number := 0;
lScn number := 0;
sScn number;
ascn number;
alog varchar2(1000);
begin
select min(start_scn), min(applied_scn) into sScn, ascn from dba_capture ;
DBMS_OUTPUT.ENABLE(2000);
for cr in (select distinct(a.ckpt_scn)
from system.logmnr_restart_ckpt\\$ a
where a.ckpt_scn <= ascn and a.valid = 1
and exists (select * from system.logmnr_log\\$ l
where a.ckpt_scn between l.first_change# and l.next_change#)
order by a.ckpt_scn desc)
loop
if (hScn = 0) then
hScn := cr.ckpt_scn;
else
lScn := cr.ckpt_scn;
exit;
end if;
end loop;
if lScn = 0 then
lScn := sScn;
end if;
select min(sequence#) into alog from v\\$archived_log where lScn between first_change# and next_change#;
dbms_output.put_line(alog);
end;
EOF`
# if there are no archives that must be kept, instead of a number we just get the "PL/SQL successful" message
ret=`echo $var | awk '{print $1}'`
if [ ! "$ret" = "PL/SQL" ];then
LAST_APPLIED=$ret
else
unset LOGMINER
fi
fi
PERC_NOW=`get_perc_occup`
if [ $PERC_NOW -gt $MAX_PERC ];then
cd $ARC_DIR
cpt=`ls -tr *.$EXT_ARC | wc -w`
if [ ! "x-$cpt" = "x-" ];then
MID=`expr $cpt / $PART`
cpt=0
ls -tr *.$EXT_ARC |while read ARC
do
cpt=`expr $cpt + 1`
if [ $cpt -gt $MID ];then
break
fi
if [ "$CHECK_APPLIED" = "YES" -o "$LOGMINER" = "TRUE" ];then
VAR=`echo $ARC | sed 's/.*_\([0-9][0-9]*\)\..*/\1/' | sed 's/[^0-9][^0-9].*//'`
if [ $VAR -gt $LAST_APPLIED ];then
continue
fi
fi
case $ACTION in
'compress' ) $COMPRESS_PRG $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC compressed using $COMPRESS_PRG" >> $LOG
fi ;;
'delete' ) rm $ARC_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC deleted" >> $LOG
fi ;;
'list' ) ls -l $ARC_DIR/$ARC ;;
'move' ) mv $ARC_DIR/$ARC $TARGET_DIR
if [ ! "x-$COMPRESS_PRG" = "x-" ];then
$COMPRESS_PRG $TARGET_DIR/$ARC
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR and compressed" >> $LOG
fi
else
if [ "x-$VERBOSE" = "x-TRUE" ];then
echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR" >> $LOG
fi
fi ;;
esac
done
else
echo "Warning : the filesystem is full, but not due to archive logs !"
exit
fi
elif [ "x-$VERBOSE" = "x-TRUE" ];then
echo "Nothing to do at `date +%d-%m-%Y' '%H:%M`" >> $LOG
fi -
Issue about GP Archive&Delete functionality
Hi,
When using Guided Procedures in the Portal, how can I find out where the GP Archive & Delete functionality (to clean up old process instances) is performed?
Cheers,
Fernando
Hi,
If you are looking for the tool to perform archiving and deletion management of GP processes, then below is the help link for your reference.
[http://help.sap.com/saphelp_nw70/helpdata/en/a4/114a42a597b430e10000000a155106/frameset.htm]
If this is not what you are looking for then kindly provide more details on your query.
Thanks & Regards,
Swapna Priya. -
Hi all,
I wrote a class that unpacks a zip archive, but it takes so much time.
I need to unpack the archive faster.
Could someone help me?
try {
    final ZipInputStream zis = new ZipInputStream(new BufferedInputStream(new FileInputStream(archive)));
    try {
        ZipEntry ze;
        while ((ze = zis.getNextEntry()) != null) {
            final String filename = ze.getName();
            final File f = new File(dir, filename);
            if (ze.isDirectory()) {
                if (!f.exists()) {
                    f.mkdirs();
                }
            } else {
                // make sure the parent directory exists before writing
                if (f.getParent() != null) {
                    final File parent = new File(f.getParent());
                    if (!parent.exists()) {
                        parent.mkdirs();
                    }
                }
                int count;
                final byte[] data = new byte[2048];
                final BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(f), 2048);
                try {
                    // stream the entry to disk in 2 KB chunks
                    while ((count = zis.read(data, 0, 2048)) != -1) {
                        out.write(data, 0, count);
                    }
                } finally {
                    out.flush();
                    out.close();
                }
                zis.closeEntry();
            }
        }
    } finally {
        zis.close();
    }
} catch (Exception e) {
    e.printStackTrace();
}
That's my code.
There are a lot of files... -
Burning a project issued from eyeTV archive
After exporting to iDVD and trying to burn, I get a warning: "errors during project validation, you must solve them before burning". At this step, I cannot find any solution ????
thks
Likewise. I agree iDVD isn't a practical general burn utility. It simply was never written as such. Even though it can be done with iDVD, it really isn't intended for burning recorded TV programs, which is what EyeTV is written and designed to do.
ARCHIVE LOGS CREATED in WRONG FOLDER
Hello,
I'm facing an issue with the Archive logs.
In my Db the parameters for Archive logs are
log_archive_dest_1 string LOCATION=/u03/archive/SIEB MANDATORY REOPEN=30
db_create_file_dest string /u01/oradata/SIEB/dbf
db_create_online_log_dest_1 string /u01/oradata/SIEB/rdo
But the archive logs are created in
/u01/app/oracle/product/9.2.0.6/dbs
Listed Below :
bash-2.05$ ls -lrt *.arc
-rw-r----- 1 oracle dba 9424384 Jan 9 09:30 SIEB_302843.arc
-rw-r----- 1 oracle dba 7678464 Jan 9 10:00 SIEB_302844.arc
-rw-r----- 1 oracle dba 1536 Jan 9 10:00 SIEB_302845.arc
-rw-r----- 1 oracle dba 20480 Jan 9 10:00 SIEB_302846.arc
-rw-r----- 1 oracle dba 10010624 Jan 9 10:30 SIEB_302847.arc
-rw-r----- 1 oracle dba 104858112 Jan 9 10:58 SIEB_302848.arc
bash-2.05$
Does anyone have an idea why this happens?
Is this a bug?!
Thxs
But in another DB I have
log_archive_dest string
log_archive_dest_1 string LOCATION=/u03/archive/SIEB MANDATORY REOPEN=30
and my archivelogs are in
oracle@srvsdbs7p01:/u03/archive/SIEB/ [SIEB] ls -lrt /u03/archive/SIEB
total 297696
-rw-r----- 1 oracle dba 10010624 Jan 9 10:30 SIEB_302847.arc
-rw-r----- 1 oracle dba 21573632 Jan 9 11:00 SIEB_302848.arc
-rw-r----- 1 oracle dba 101450240 Jan 9 11:30 SIEB_302849.arc
-rw-r----- 1 oracle dba 6308864 Jan 9 12:00 SIEB_302850.arc
-rw-r----- 1 oracle dba 12936704 Jan 9 12:30 SIEB_302851.arc
oracle@srvsdbs7p01:/u03/archive/SIEB/ [SIEB] -
What is the best way to archive large mailboxes
I am using Mail on Mavericks 10.9.5 with my corporate mail (Exchange 2007). I am having issues trying to archive my email and remove it from the Exchange server. I usually divide mail into years, e.g. a mailbox for 2012, 2013, etc.
I have two years of mail with about 14000 messages, and I want to migrate it off. I have taken the usual steps of creating a new mailbox local to my Mac and moving mail this way. However, unless I pick a very small number of messages, this method crashes Mail. I have tried this over a few different versions of Mail, 10.8.x and 10.9.x, and I get the same result.
Is there a way to just move the current Exchange mailbox on my Mac from a live state to an archive state at the Terminal or command-line level?
If I can do this, I will then archive that mail from the Exchange server such that I start with a fresh/empty mailbox.
Thanks
Niall
13" MBAir mid 2012
8GB Ram
256GB Storage (156GB free).
Hi,
Are you asking with regards to on-premises Exchange? With Microsoft Online SaaS services (aka Exchange Online) there is no control and no need to control which data center a mailbox resides in.
With regard to on-premises Exchange, you have two choices. You can move it over the WAN, in which case you would either do a native mailbox move (assuming you have Exchange 2010 or later, you can suspend the move after the copy so you can control the
time of the cutover), or create a database copy in the second data center and, once the database copies have synchronized, change the active copy.
The other choice is to move is out of band which would usually involve an offline seed of the database (you could conceivably move via PST file but that would disrupt access to the mailbox and is not really the 'best way').
In general, Exchange on-premises questions are best asked on the Exchange forum: http://social.technet.microsoft.com/Forums/office/en-US/home?category=exchangeserver
Thanks,
Guy -
Saved search issue with custom link
Hi all,
Generally we have one opportunity search. But according to my requirement I have added one more opportunity search for a separate opportunity type.
I do have 2 different view configurations for regular search/result and newly added custom search/result.
The issue is that an archived saved search (saved in the regular opportunity search) does not work properly when I open the custom search and then open the saved search (from the home page). I first need to open the regular opportunity search and execute the saved search; then it works.
When I come out of the custom search/result and then execute the saved search, it calls the configuration of the custom search/result.
I need to destroy the custom search/result configuration when I leave it or execute the saved search. I already tried the WD_DESTROY method at main view set level, but it is not working.
Regards.
rama
Solved it myself.
Below is the solution: go to the view set implementation class ZL_BT111S_O_MAINVIEWSET_IMPL and call the default configuration in method DO_CONFIG_DETERMINATION.
CALL METHOD me->set_config_keys
EXPORTING
iv_object_type = '<DEFAULT>'
iv_object_sub_type = '<DEFAULT>'
iv_propagate_2_children = 'X'.
regards,
rama -
Archiving problems JOB_OUTPUT_INFO-ARCHDONE
Hi all.
When our archiving connection is down or deactivated for any reason and we issue a document, the NAST table says the archiving was done, but that's not true, because there is no connection to the archive.
I'm checking JOB_OUTPUT_INFO-ARCHDONE after issuing the form, but the value is set to 'X' despite the archiving having failed.
How can I know, after the smart form was issued, whether the archiving was done OK or not?
Thanks.
Nico.
Hi Nico,
In transaction OAC3 you should be able to see in which link table the object you are trying to archive is stored (TOA01, TOA02 or TOA03). If your document exists in the link table, your document is in the archive.
Regards,
Martin -
We have an issue moving faulty source files to a particular directory.
We can move all successfully processed files to an archive directory, but we are not able to move the error files (having some data issue) to another archive directory so that we can continue with the processing of other files.
Module used:
AF_Modules/MessageTransformBean --> 1
Parameter:
Transform.PermanentErrors --> true
But this is not working.
I see a similar thread which is not answered:
Re: Archive Faulty Source File
Your inputs are appreciated.
I actually have two scenarios:
1. Asynchronous
2. Synchronous
In asynchronous mode, the best part is that all files get processed; even if there are erroneous files in between, it at least continues with the next file. My problem here is that it archives all successful and error files to the same archive directory, though I have specified a different directory for 'Faulty Source Files'. And there are no errors in CC monitoring. But there is an error in SXMB_MONI (I actually failed it intentionally):
Runtime exception occurred during application mapping com/sap/xi/tf/_CreateProductMaster_to_CreateProdu~; com.sap.aii.utilxi.misc.api.BaseRuntimeException: The element type "Products" must be terminated by the matching end-tag "</Products>".
In synchronous mode, if there is an erroneous file in between, it never goes on to the next file; it is stuck on the same file and retries it. The error in CC monitoring is:
Error: com.sap.engine.interfaces.messaging.api.exception.MessagingException: Application:EXCEPTION_DURING_EXECUTE:
In sxmb_moni
Runtime exception occurred during application mapping com/sap/xi/tf/_CreateProductMaster_to_CreateProdu~; com.sap.aii.utilxi.misc.api.BaseRuntimeException: The element type "Products" must be terminated by the matching end-tag "</Products>". -
Archive directory user rights?
Hi all: I recently migrated our GW2014 SP1 server over to new hardware. The migration went smoothly and our domain and post office seem happy. My users are reporting an 8201 error when starting up their client, and I have traced it to a user-rights issue with our archive folder on the server. I have given full rights to everyone as a temporary measure, but I want to get the proper rights set up. BTW, the archive directory is on an NSS volume.
So, what are the proper user rights to the archive directory? Thanks much, Chris.
Hi Chris,
They need Read, Write, File Scan, Create, Erase, Modify, Delete - all except Access Control and Supervisor.
Hope that helps.
Cheers, -
Problem with archiving print lists
Hi all,
we have an issue with asynchronous archive requests.
The archiving of some spool lists fails. Here is the error message in OAM1:
Order History
12.03.2010 19:36:10 Request Created
12.03.2010 19:36:10 Asynchronous request started; awaiting confirmation
12.03.2010 19:36:10 Content Repository Reports Error
JobLog:
Job started
Step 001 started (program RSCMSCAJ, variant &0000000023175, user ID XXXXXX)
Error occurred during character conversion
SAP ArchiveLink (error in storage system)
Job cancelled after system exception ERROR_MESSAGE
It gets weird when I reprocess the request in OAM1, because then it works and we get no error message. It feels like when we start 50+ requests, some jobs fail, while if we start archiving of <10 requests everything works fine.
We tried archiving 800+ spool lists in our QAS system with no problem. So we suspected that our PRD content server might be broken or something, and tried archiving spool lists from our SAP PRD system to our QAS content server -> same issue. The only thing left would be a misconfiguration in our SAP PRD system, but a few weeks ago everything worked fine and we had no issues.
Has somebody faced similar issues?
Regards
Davor
Hi,
Sorry, I have to answer with questions:
Are you archiving spools larger than 2 GB?
Can you send us the logs from the content server?
Can you check OAM1 for more info (check logging)?
Rgds
Nico