VB script query - Splitting and rebuilding

I have a query string which includes the following:
Area=22&Area=25&Area=1021
This is dynamic and can include many values, or just one, for the Area field.
I then take this step:
<%dim sarray
cnt = 0
sarray = split(request("Area"),",")
for each Area in sarray
tsql2 = Area
tsql = " '"&(tsql2)&"',"
cnt = cnt +1
dquery = dquery & tsql
next
response.Write("("&dquery&")")
%>
This produces ( '22', ' 25', ' 1021', ).
My problem is the trailing "," on the last value.
Any idea how to trim it so I get this: ( '22', ' 25', ' 1021')
Thanks in advance.

Add this line after the loop (between "next" and the response.Write):
dquery = Left(dquery, Len(dquery)-1)
That chops the trailing comma off once the whole string has been built.
Best regards,
Chris.
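If it helps, here is an alternative sketch (untested) that avoids the trailing comma altogether by quoting each value and letting Join supply the separators:
<%
Dim areas, i
areas = Split(Request("Area"), ",")
For i = 0 To UBound(areas)
    areas(i) = "'" & Trim(areas(i)) & "'"
Next
Response.Write "(" & Join(areas, ", ") & ")"
%>
Trim also strips the leading space that Request() inserts after each comma when a field appears more than once in the query string.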

Similar Messages

  • After Effects scripts, split and combine comps

    Hi All
    I have tried to search but to no avail so I turn to this cool forum to post my question.
    We have a charity fun run coming up, and I am part of the team that films the finish line.
    We record the finish line, string the footage together into a comp in AE, and we have this cool script that splits the comp up into 30 second rendered chunks. It also staggers the splits, so split 1 = 0 to 30 sec, split 2 = 15 to 45 sec, split 3 = 30 to 60 sec and so on. The goal is to create short 30 second video clips where runners can see themselves cross the finish line. We stagger the splits so that people who cross the line at, say, 29 sec see themselves in the middle of the second clip rather than at the end of the first. Hopefully this all makes sense.
    We would like to create a separate video of a logo spinning in, as well as a quick thank you message. We can then append these to the start and the end of each 30 second video.
    So to my question...is there a way to create an AE script (or extend the existing script) so that as AE splits up the video into 30 second chunks, it adds the same separate 10 second comp (logo) at the start of each, and the same 10 second comp (thank you for raising money) at the end of each of the 30 second clips?
    Hopefully the above makes sense. I am pretty familiar with AE, but not so much the scripting side.
    Any insight, suggestions, would be VERY welcome. Also FYI, we have over 1.5 hours of finish line footage, so LOTS of 30 second clips... hence why the splitting script was created in the first place.
    Thanks
    Lawrence

    is there a way to create an AE script (or extend the existing script) so that as AE splits up the video into 30 second chunks, it adds the same separate 10 second comp (logo) at the start of each, and the same 10 second comp (thank you for raising money) at the end of each of the 30 second clips?
    Sure. Ask on the scripting forum, AEnhancers, or hire one of the guys from AEScripts. I'm sure it might even attract volunteers if you just posted your existing code.
    Mylenium
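    To get you started, here is a very rough sketch (untested) of the wrapping idea. It assumes the intro and outro comps are named "Logo" and "ThankYou" and that the split comps share a "split_" name prefix - all of that is made up and would need to match what your splitting script really produces:
    // Wrap every split comp with the 10 second intro and outro comps.
    var proj = app.project;
    var logo = null, thanks = null, splitComps = [];
    for (var i = 1; i <= proj.numItems; i++) {
        var it = proj.item(i);
        if (!(it instanceof CompItem)) continue;
        if (it.name == "Logo") logo = it;
        else if (it.name == "ThankYou") thanks = it;
        else if (it.name.indexOf("split_") == 0) splitComps.push(it);   // naming assumption
    }
    for (var j = 0; j < splitComps.length; j++) {
        var clip = splitComps[j];
        var wrap = proj.items.addComp("FINAL_" + clip.name, clip.width, clip.height,
                                      clip.pixelAspect,
                                      logo.duration + clip.duration + thanks.duration,
                                      clip.frameRate);
        wrap.layers.add(thanks).startTime = logo.duration + clip.duration;
        wrap.layers.add(clip).startTime   = logo.duration;
        wrap.layers.add(logo).startTime   = 0;
    }
    You would still have to queue the new FINAL_ comps for render, but that is the general shape of it.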

  • Having multiple problems with script - NTFS Permissions and AD Groups

    Hi, all!  I'm having multiple problems with my first script I've written with Powershell.  The script below does the following:
    1. Prompts the user for a corporate division under which a shared folder will be created, and adjusts variables accordingly.
    2. Prompts if the folder will be a global folder or an office/location-specific folder, and makes appropriate adjustments to variables.
    3.  If a global folder, prompts for the name.  If an office/location-specific folder, prompts for each component of the street address, city and state and an optional modifier.  I've prompted for this information in this way because the information
    is used differently later on in the script.
    4.  Verifies the entered information and requests confirmation to proceed.
    5.  Creates the folder.
    6.  Creates an AD OU and/or security group(s).
    7.  Applies appropriate security groups to the new folder and removes undesired permissions.
    Import-Module ActiveDirectory
    $Division = ""
    $DivAbbr = ""
    $OU = ""
    $OUDrive = "AD:\"
    $FolderName = ""
    $OUName = ""
    $GroupName = ""
    $OURoot = "ou=DFS Restructure Testing OU,ou=Pennsylvania Camp Hill 4410 Industrial Park Rd,ou=Locations,ou=Camp Hill,dc=jacobsonco,DC=com"
    $FSRoot = "E:\"
    $FolderPath = ""
    $DefaultFolders = "Archive","Customer Service","Equipment","Inbounds","Management","Outbounds","Processes","Projects","Quality","Reports","Returns","Safety","Schedules","Time Keeping","Training"
    [bool]$Location = 0
    do {
        $userInput = Read-Host "Enter CLS Division: (W)arehousing, (S)taffing, or (P)ackaging"
        Switch ($userInput) {
            W {$Division = "Warehousing"; $DivAbbr = "WHSE"; $OU = "ou=Warehousing,"; break}
            S {"Staffing is not yet implemented."; break}
            P {"Packaging is not yet implemented."; break}
            default {"Invalid choice. Please re-enter."; break}
        }
    } while ($DivAbbr -eq "")
    write-host ""
    write-host ($Division + " was selected.")
    $FolderPath = $Division + "\"
    write-host ""
    $choice = ""
    do {
        $choice = Read-Host "Will this be a (G)lobal folder or (L)ocation folder?"
        Switch ($choice) {
            G {$Location = $false; break}
            L {$Location = $true; $FolderPath = $FolderPath + "Locations\"; $OU = "ou=Locations," + $OU; break}
            default {"Invalid choice. Please re-enter."; $choice = ""; break}
        }
    } while ($choice -eq "")
    write-host ""
    write-host ("Location is set to: " + $Location)
    write-host ""
    if ($Location -eq $false) {
    $FolderName = Read-Host "Please enter folder name:"
    $GroupName = $DivAbbr + " " + $FolderName
    } else {
    $input = Read-Host "Please enter two-letter state abbreviation:"
    $FolderName = $FolderName + $input + " "
    $input = Read-Host "Please enter city:"
    $FolderName = $FolderName + $input + " "
    $input = Read-Host "Please enter street address number only:"
    $FolderName = $FolderName + $input
    $GroupName = $DivAbbr + " " + $FolderName
    $FolderName = $FolderName + " "
    $input = Read-Host "Please enter street name:"
    $FolderName = $FolderName + $input
    $input = Read-Host "Please enter any optional information to appear in folder name:"
    if ($input -ne "") {
    $FolderName = $FolderName + " " + $input
    $OUName = $FolderName
    write-host
    write-host "Path for folder: "$FSRoot$FolderPath$FolderName
    write-host "AD Path: "$OUDrive$OU$OURoot
    write-host "New OU Name: "$OUName
    write-host -NoNewLine "New Security Group names: "$GroupName
    if ($Location -eq $true) { write-host " and "$GroupName" MGMT" }
    write-host
    $input = Read-Host "Please confirm creation of new site/folder: (Y/N) "
    if ($input -ne "Y") { Exit }
    write-host
    write-host -NoNewLine "Folder exists: "; Test-Path ($FSRoot + $FolderPath + $FolderName)
    if (Test-Path ($FSRoot + $FolderPath + $FolderName)) {
    Write-Host "Folder already exists! Skipping folder creation..."
    } else {
    write-host "Folder does not exist. Creating..."
    new-item -path ($FSRoot + $FolderPath) -name $FolderName -itemtype directory
    Set-Location ($FSRoot + $FolderPath + $FolderName)
    if ($Location -eq $true) {
    $tempOUName = "ou=" + $OUName + ","
    write-host
    write-host $OUDrive$tempOUName$OU$OURoot
    write-host
    write-host -NoNewLine "OU exists: "; Test-Path ($OUDrive + $tempOUName + $OU + $OURoot)
    if (Test-Path ($OUDrive + $tempOUName + $OU + $OURoot)) {
    Write-Host "OU already exists! Skipping OU creation..."
    } else {
    write-host "OU does not exist. Creating..."
    New-ADOrganizationalUnit -Name $OUName -Path ($OU + $OURoot) -ProtectedFromAccidentalDeletion $false
    $GroupNameMGMT = $GroupName + " MGMT"
    if (!(Test-Path ($OUDrive + "CN=" + $GroupName + "," + $tempOUName + $OU + $OURoot))) { write-host "Normal user group does not exist. Creating..."; New-ADGroup -Name $GroupName -GroupCategory Security -GroupScope Global -Path ("OU=" + $OUName + "," + $OU + $OURoot)}
    if (!(Test-Path ($OUDrive + "CN=" + $GroupNameMGMT + "," + $tempOUName + $OU + $OURoot))) { write-host "Management user group does not exist. Creating..."; New-ADGroup -Name $GroupNameMGMT -GroupCategory Security -GroupScope Global -Path ("OU=" + $OUName + "," + $OU + $OURoot)}
    $FolderACL = get-acl ($FSRoot + $FolderPath + $FolderName)
    $FolderACL.SetAccessRuleProtection($True,$True)
    # $FolderACL.Access | where {$_.IdentityReference -eq "BUILTIN\Users"} | %{$FolderACL.RemoveAccessRuleAll($_)}
    $BIUsers = New-Object System.Security.Principal.NTAccount("BUILTIN\Users")
    $BIUsersSID = $BIUsers.Translate([System.Security.Principal.SecurityIdentifier])
    write-host $BIUsersSID.Value
    # out-string -inputObject $BIUsers
    $Ar = New-Object System.Security.AccessControl.FileSystemAccessRule($BIUsersSID.Value,"ReadAndExecute,AppendData,CreateFiles,Synchronize","ContainerInherit, ObjectInherit", "None", "Allow")
    $FolderACL.RemoveAccessRuleAll($Ar)
    Set-ACL ($FSRoot + $FolderPath + $FolderName) $FolderACL
    get-acl ($FSRoot + $FolderPath + $FolderName) | fl
    $FolderACL = get-acl ($FSRoot + $FolderPath + $FolderName)
    $ADGroupName = "JACOBSON\" + $GroupName
    $objUser = New-Object System.Security.Principal.NTAccount($ADGroupName)
    $objUser.Translate([System.Security.Principal.SecurityIdentifier]).Value
    write-host $ADGroupName
    write-host $objUser.Value
    $Ar = New-Object System.Security.AccessControl.FileSystemAccessRule($ADGroupName,"ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
    Out-String -InputObject $ar
    $FolderACL.AddAccessRule($Ar)
    $ADGroupName = "JACOBSON\" + $GroupNameMGMT
    $Ar = New-Object System.Security.AccessControl.FileSystemAccessRule($ADGroupName, "Modify", "ContainerInherit, ObjectInherit", "None", "Allow")
    Out-String -InputObject $ar
    $FolderACL.AddAccessRule($Ar)
    Set-ACL ($FSRoot + $FolderPath + $FolderName) $FolderACL
    } else {
    $tempOUName = "cn=" + $GroupName + ","
    write-host
    write-host $OUDrive$tempOUName$OU$OURoot
    write-host
    write-host -NoNewLine "Group exists: "; Test-Path ($OUDrive + $tempOUName + $OU + $OURoot)
    if (Test-Path ($OUDrive + $tempOUName + $OU + $OURoot)) {
    Write-Host "Security group already exists! Skipping new security group creation..."
    } else {
    write-host "Security group does not exist. Creating..."
    New-ADGroup -Name $GroupName -GroupCategory Security -GroupScope Global -Path ($OU + $OURoot)
    $FolderACL = get-acl ($FSRoot + $FolderPath + $FolderName)
    $ADGroupName = "JACOBSON\" + $GroupName
    $FolderACL.SetAccessRuleProtection($True,$True)
    $Ar = New-Object System.Security.AccessControl.FileSystemAccessRule($ADGroupName,"Modify","ContainerInherit, ObjectInherit", "None", "Allow")
    $FolderACL.AddAccessRule($Ar)
    $FolderACL.Access | where {$_.IdentityReference -eq "BUILTIN\Users"} | %{$FolderACL.RemoveAccessRuleAll($_)}
    Set-ACL ($FSRoot + $FolderPath + $FolderName) $FolderACL
    My problems right now are in the assignment/removal of security groups on the newly-created folder, and the problems are two-fold.  Yes, I am running this script as an Administrator.
    First, I am unable to remove the BUILTIN\Users group from the folder when this is an office/location-specific folder.  I've tried to remove the group in several different ways, and none are having any effect.  Oddly, if I type in the lines directly
    into Powershell, they work as expected.  I've tried the following methods:
    $FolderACL = get-acl ($FSRoot + $FolderPath + $FolderName)
    $FolderACL.SetAccessRuleProtection($True,$True)
    $FolderACL.Access | where {$_.IdentityReference -eq "BUILTIN\Users"} | %{$FolderACL.RemoveAccessRuleAll($_)}
    Set-ACL ($FSRoot + $FolderPath + $FolderName) $FolderACL
    $FolderACL = get-acl ($FSRoot + $FolderPath + $FolderName)
    $FolderACL.SetAccessRuleProtection($True,$True)
    $BIUsers = New-Object System.Security.Principal.NTAccount("BUILTIN\Users")
    $BIUsersSID = $BIUsers.Translate([System.Security.Principal.SecurityIdentifier])
    $Ar = New-Object System.Security.AccessControl.FileSystemAccessRule($BIUsersSID.Value,"ReadAndExecute,AppendData,CreateFiles,Synchronize","ContainerInherit, ObjectInherit", "None", "Allow")
    $FolderACL.RemoveAccessRuleAll($Ar)
    Set-ACL ($FSRoot + $FolderPath + $FolderName) $FolderACL
    In the first case, the script goes through and has no apparent effect because afterwards, I do a get-acl and the BUILTIN\Users group is still there, although when looking through the GUI, inheritance appears to have been broken from the parent folder.
    In the second case, I get the following error message:
    Exception calling "RemoveAccessRuleAll" with "1" argument(s): "Some or all identity references could not be translated."
    At C:\Users\tesdallb\Documents\FileServerBuild.ps1:110 char:5
    +     $FolderACL.RemoveAccessRuleAll($Ar)
    +     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
        + FullyQualifiedErrorId : IdentityNotMappedException
    This seems strange that the local server is unable to translate the SID of a BUILTIN account.  I've also tried explicitly putting in the BUILTIN\Users SID in place of the variable in the New-Object line, but that gives me the same error.  I've
    also tried the solutions given in this thread:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/ad59dc58-1360-4652-ae09-2cd4273cbd4f/remove-acl-issue?forum=winserverpowershell and at this URL:
    http://technet.microsoft.com/en-us/library/ff730951.aspx but these solutions also failed to have any effect.
    My second problem is when I try to apply the newly-created security groups, I also will get the "Some or all identity references could not be translated."  I thought I had found a workaround to the problem by adding the -PassThru option to
    the New-ADGroup commands, because it would output the SID of the group after creation, however a few lines later, the server is unable to translate the account to apply the security groups to the folder.
    My first Powershell script has been working well up to this point and now I seem to have hit a showstopper.  Any help is appreciated.
    Thanks!

    I was hoping to stay with strictly Powershell, but unless I can find a Powershell solution, I may resort to ICACLS.
    As for the problems with my groups not being translatable right after creating them, I think I have solved this problem by using the -Server parameter on all my New-ADGroup commands and this example code seems to have gotten around the translation problem,
    again utilizing the -Server parameter on the Get-ADGroup command:
    get-acl ($FSRoot + $FolderPath + $FolderName) | fl
    $FolderACL = get-acl ($FSRoot + $FolderPath + $FolderName)
    # Add the new normal users group to the folder with Read and Execute permissions
    $GroupSID = Get-ADGroup -Identity $GroupName -Server chadc01.jacobsonco.com | Select-Object -ExpandProperty SID
    $SIDIdentity = New-Object System.Security.Principal.SecurityIdentifier($GroupSID)
    $Ar = New-Object System.Security.AccessControl.FileSystemAccessRule($SIDIdentity,"ReadAndExecute","ContainerInherit, ObjectInherit", "None", "Allow")
    $FolderACL.AddAccessRule($Ar)
    # Add the management users group to the folder with Modify permissions
    $GroupMGMTSID = Get-ADGroup -Identity $GroupNameMGMT -Server chadc01.jacobsonco.com | Select-Object -ExpandProperty SID
    $SIDIdentity = New-Object System.Security.Principal.SecurityIdentifier($GroupMGMTSID)
    $Ar = New-Object System.Security.AccessControl.FileSystemAccessRule($SIDIdentity, "Modify", "ContainerInherit, ObjectInherit", "None", "Allow")
    $FolderACL.AddAccessRule($Ar)
    Set-ACL ($FSRoot + $FolderPath + $FolderName) $FolderACL
    Going this route seems to ensure that the Domain Controller I'm creating my groups on is the same one that I'm querying for the group's SID to use in the FileSystemAccessRule.  It's been working fairly consistently.
    Still having issues with the translation of the BUILTIN\Users group, though. 
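    On the BUILTIN\Users part, one more thing that may be worth a try (sketch only, not tested in your environment): apply the inheritance break with Set-Acl first, then re-read the ACL and purge the entries by well-known SID, so that no account-name translation is involved at all:
    $folder     = $FSRoot + $FolderPath + $FolderName
    $builtinSid = New-Object System.Security.Principal.SecurityIdentifier("S-1-5-32-545")   # well-known SID for BUILTIN\Users
    # Break inheritance first (keeping copies of the inherited ACEs) and write it back,
    # so the formerly inherited entries become explicit entries that can actually be removed.
    $acl = Get-Acl $folder
    $acl.SetAccessRuleProtection($true, $true)
    Set-Acl -Path $folder -AclObject $acl
    # Now re-read the ACL and drop every ACE belonging to BUILTIN\Users, matched by SID.
    $acl = Get-Acl $folder
    $acl.PurgeAccessRules($builtinSid)
    Set-Acl -Path $folder -AclObject $acl
    The reason RemoveAccessRuleAll appeared to do nothing in your first attempt may be that the rules were still flagged as inherited in the in-memory ACL until the protection change had actually been written back.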

  • Unicode export:Table-splitting and package splitting

    Hi SAP experts,
    I know there are lot of forums related to this topic, but I have some new questions and hence posting a new thread.
    We are in the process of doing a unicode conversion in our landscape (CRM 7.0 system based on NW 7.01) and we are running on AIX 6.1 and DB2 9.5. The database size is around 1.5 TB and hence we want to optimize the export and import in order to reduce the downtime. As part of the process, we have tried table-splitting and parallel export/import.
    However, we have some doubts whether this table-splitting has actually worked in our scenario, as the export executed for nearly 28 hours.
    The steps followed by us :
    1.) Doing the export preparation using SAPINST
    2.) Doing table splitting preparation, by creating a table input file having entries in the format <tablename>%<No. of splits>. Also, we have used the latest R3ta file and the dbdb6slib.o (belonging to version 7.20, even though our system is on 7.01) using SAPINST.
    3.) Starting with the export using SAPINST.
    some observations and questions:
    1.) After completion of the table-splitting preparation, there were .WHR files generated for each of the tables in the DATA directory of the export location. However, how many .WHR files should be created and on what basis are they created?
    2.) I will take the example of the table PRCD_CLUST (a cluster table) in our environment, which we had split. We had 29 *.WHR files created for this particular table. The number of splits given for this table was 36 and the table size is around 72 GB. Also, we noticed that the first 28 .WHR files for this table had lots of records but the last, 29th .WHR file had only 1 record. We also noticed that the packages/splits for the first 28 splits were created quite fast, but the 29th one took a long time (several hours) to get completed. Also, lots of packages were generated (around 56) of size 1 GB each for this 29th split, and there was only one R3load running for this split, generating packages one by one.
    3.) Also, our question here is whether there is any thumb rule for deciding on the number of splits for a table. Also, during the export, are there any things that need to be specified while giving the inputs on the table-splitting screen?
    4.) Also, what exactly is the difference between table-splitting and package-splitting? Are they both effective together?
    If you have any questions and or need any clarifications and further inputs, please let me know.
    It would be great, if we could get any insights on this whole procedure, as we know a lot of things are taken care by SAPINST itself in the background, but we just want to be certain that we have done the right thing and this is the way it should work.
    Regards,
    Santosh Bhat

    Hi,
    First of all please ignore my very first response ... i have accidentally posted a response to some other thread...sorry for that 
    Now coming you your questions...
    > 1.) Can package splitting and table-splitting be used together? If yes or no, what exactly is the procedure to be followed. As I observed that, the packages also have entries of the tables that we decided to split. So, does package splitting or table-splitting override the other, and only one of the two can be effective at a time?
    Package splitting and table splitting work together, because they serve different purposes.
    My way of doing it is...
    When I do the package split I choose packageLimit 1000 and also split out the tables (which I selected for table split) into separate packages (one package per table). I do it because that helps me track those tables.
    Once the above is done I follow it up with R3ta and the where-splitter for those tables.
    That is followed by the manual migration monitor to do the export/import. As mentioned in the previous reply, you need to ensure you sequence your packages properly: large tables are exported first, use sections in the package list file, etc.
    > 2.) If you are well versed with table splitting procedure, could you describe maybe in brief the exact procedure?
    Well, I would say run R3ta (it will create multiple select queries) followed by the where-splitter (which will just split each of the selects into multiple WHR files)...
    Best would be to go through some documentation on table splitting and let me know if you have a specific query. Don't miss the role of the hints file.
    > 3.) Also, I have mentioned about the version of R3ta and library file in my original post. Is this likely to be an issue?Also, is there a thumb rule to decide on the no.of splits for a table.
    The rule is to use the executables of the kernel version supported by your system version. I am not well versed with 7.01 and 7.2 support... to give you an example, I should not use a 700 R3ta on a 640 system, although it works.
    >1.) After completion of tablesplitting preparation, there were .WHR files that were generated for each of the tables in DATA directory of export location. However, how many .WHR files should be created and on what basis are they created?
    If you ask for 10 splits... you will get 10 splits, or in some cases 11; the reason might be the field it is using to split the table (the where clause). But I'm not 100% sure about it.
    > 2) I will take an example of a table PRCD_CLUST(cluster table) in our environment, which we had split. We had 29 *.WHR files that were created for this particular table. The number of splits given for this table was 36 and the table size is around 72 GB.Also, we noticed that the first 28 .WHR files for this table, had lots of records but the last 29th .WHR file, had only 1 record. But we also noticed that, the packages/splits for the 1st 28 splits were created quite fast but the last one,29th one took a long time(serveral hours) to get completed.Also,lots of packages were generated(around 56) of size 1 GB each for this 29th plit. Also, there was only one R3load which was running for this 29th split, and was generating packages one by one.
    Not sure why you got 29 splits when you asked for 36; one reason might be that the field (key) used for the split didn't have more than 28 unique values. I don't know how PRCD_CLUST is split; you need to check the hints file for the "key". One example: suppose my table is split using company code and I have 10 company codes, then even if I ask for 20 splits I will get only 10 splits (WHRs).
    Yes, the 29th file will always have fewer records; if you open the 29th WHR you will see that it has the "greater than" clause. The first and the last WHR files have the "less than" and "greater than" clauses, a kind of safety which allows you to prepare the splits even before your downtime has started. These two WHRs ensure that no record gets missed, even though you might have prepared your WHR files a week before the actual migration.
    > 3) Also,Our question here is that is there any thumb rule for deciding on the number of splits for a table.Also, during the export, are there any things that need to be specified, while giving the inputs when we use table splitting,in the screen?
    Not aware of any thumb rule. For a first iteration you might choose something like 10 splits for 50 GB, 20 for 100 GB. If any of the tables overshoots the window, you can then try increasing or decreasing the number of splits for that table. For me, a couple of times the total export/import time improved by reducing the splits of some tables (I suppose I was over-splitting those tables).
    Regards,
    Neel
    Edited by: Neelabha Banerjee on Nov 30, 2011 11:12 PM
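    For reference, the table input file from step 2 of the question is just plain text with one <tablename>%<No. of splits> entry per line, e.g. for the table discussed here:
    PRCD_CLUST%36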

  • How does one strip out all Live Cycle data from a PDF and rebuild the form fields in Acrobat?

    Someone in a different department built a bunch of forms in Live Cycle. We now need to make minor edits to these forms but we all have Macs and can't use Live Cycle. Currently our only option to change a date and a name on each form  is to buy a new Windows workstation, buy a copy of Live Cycle and train someone for it.
    I understand the Live Cycle technology and Acrobat technology for forms are somehow different but there must be a way to just strip out all the Live Cycle form programming so that I just have the bare PDF with the text and layout.  Then make the text edits and rebuild the form fields in Acrobat.

    It depends on your PDF. Is the PDF a static XFA or a dynamic XFA?
    You can check to see if the PDF is static/dynamic by clicking File=>Save As, and it should say static or dynamic PDF as file type.
    iText will work with Static XFA forms created in LiveCycle. Dynamic XFA forms are not supported.
    You can also submit XML data to a server side script and parse the XML data using C# system.xml.xmlreader.
    Another tool that may speed the development of the project is:
    http://www.fdftoolkit.net/
    Note: FDFToolkit.net utilizes iText Technologies.

  • Convert sap script to pdf and send mail before close_form

    hi experts,
    I am converting a SAPscript to PDF and then sending that PDF to vendor mail IDs.
    I am getting the data for the PDF conversion from CLOSE_FORM, but it contains the data for all the vendors, and I have to send the mail to specific vendors. For example, if my script output has 5 sheets, each for a different vendor, I have to send 1 sheet as a PDF mail to that particular vendor, so 1 sheet each for 5 different vendors. But the data I get from CLOSE_FORM is the data for all 5 vendors. How do I split the data? Can anyone help me with this issue?
    with thanks in advance,
    syed

    Hi,
    Change your driver program so that it calls the script n number of times and sends the mail to one particular vendor each time (a fuller sketch is below):
        Loop at vendors.
         call the form ...
         send mail ...
        endloop ...
    Regards,
    Srini.
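    Fleshing that loop out a little, a rough ABAP skeleton (untested; the form name Z_VENDOR_FORM, the vendor table lt_vendors/ls_vendor and the send_pdf_to_vendor routine are placeholders you would replace with your own):
    DATA: ls_itcpo TYPE itcpo,
          lt_otf   TYPE TABLE OF itcoo,
          lt_pdf   TYPE TABLE OF tline,
          lv_size  TYPE i.
    ls_itcpo-tdgetotf = 'X'.                 " return OTF instead of printing
    LOOP AT lt_vendors INTO ls_vendor.       " one pass per vendor
      REFRESH: lt_otf, lt_pdf.
      CALL FUNCTION 'OPEN_FORM'
        EXPORTING
          form    = 'Z_VENDOR_FORM'          " your SAPscript form
          device  = 'PRINTER'
          dialog  = ' '
          options = ls_itcpo.
      CALL FUNCTION 'WRITE_FORM'
        EXPORTING
          element = 'MAIN'
          window  = 'MAIN'.
      CALL FUNCTION 'CLOSE_FORM'
        TABLES
          otfdata = lt_otf.                  " OTF for this vendor only
      CALL FUNCTION 'CONVERT_OTF'
        EXPORTING
          format       = 'PDF'
        IMPORTING
          bin_filesize = lv_size
        TABLES
          otf          = lt_otf
          lines        = lt_pdf.
      " Pack lt_pdf as an attachment and mail it to this vendor's address,
      " e.g. with SO_NEW_DOCUMENT_ATT_SEND_API1 (or CL_BCS on newer releases).
      PERFORM send_pdf_to_vendor USING ls_vendor lt_pdf lv_size.
    ENDLOOP.
    This way CLOSE_FORM hands back one vendor's OTF at a time, which CONVERT_OTF turns into a PDF you can attach and mail for just that vendor.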

  • Output from dbms_output.put_line splits and move to next line

    Hi All,
    I am printing out a list using dbms_output.put_line; it's like:
    One or more of following Required Parameters are missing:
    1. Primary Field
    2. Structure Field
    3. Structure
    Table
    4. List File Name
    5. Query Directory
    6. Query String
    but I don't know why the third option is splitting and moving to the next line. Any idea? It's not even that long.
    thanks

    set linesize 150
    or set it as per your requirement
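    For example (assuming SQL*Plus 10g or later for SIZE UNLIMITED), so longer items such as "3. Structure Table" stay on one line:
    SET SERVEROUTPUT ON SIZE UNLIMITED FORMAT WRAPPED
    SET LINESIZE 150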

  • SuPM - Data from SAP BW Query(Automatic) and Manual not displayed in KPI

    Dear Forum,
    I am also, currently working on a project implementing BO SuPM Version 1.0. Landscape is ECC -> BI -> SuPM dashboards.
    I have created a KPI in SuPM dashboard.
    Case 1 : Automatic data collection - This KPI is marked for Automatic data collection. So, maintained Scripts with Connectivity and SAP BW Query name. I save this KPI and include in a new report and Run it. There are no records displayed.
    Case 2: Manual data collection - This KPI is marked for manual data collection. So, maintained values in SuPM portal itself. I save this KPI and include in a new report and Run it. There are no records displayed.
    Could you please help me how KPI in SuPM be filled manually and automatically ( from BW Query)?
    Thanks,
    Best Regards
    PhaniRaj

    Dear Phani,
    First of All, you should have a framework created and assign the Core KPI to that framework and specify the frequency of the framework (Say Monthly) and Activate it.
    Upon activation, if it is manual data collection, then you should perform role assignment, i.e. specifying who will be the Business Contributor & Approver for your manual KPI.
    After this, you have to run the program [/SRCORE/DATAREQUEST] in SE38 in the backend system, which will create a data request.
    Later you should login as Business Contributor and go to MY Data Requests and then provide the data for the Month specified. Then login as Approver and under Approvals option, you should Approve the data. Now, you should go to SuPM application and select that Month in the dimension and KPI and then RUN the Report.
    For automated data collection, you initially have to create a query (SAP/BW) and specify the name of the query in SPRO --> Sustainability Performance Management --> Automated Data Collection --> Maintain Queries (SAP/BW) --> specify that query name and the connector ID (SM59).
    Now go for KPI creation and specify the Query name in the KPI and then Assign the KPI to the framework and activate it.
    Upon Activation, You should RUN the program /SRCORE/AUTO_DATA_COLLECT. Now, select this KPI in the report and RUN the report.
    If you perform these actions, you will get the values in the report.
    It can be checked in the backend application, either by checking the Process chain[RSPC] or by checking the Infocube if the data is loaded into the Infocube or not.
    Let me know if you face any problems...
    Regards,
    Raghu

  • Script to split pdf into sections

    Hello
    I have an awful lot of books in pdf format
    I now have to be able to split them into their chapters. I'm assuming I'll have to make up a text file for each book to tell the script where each chapter starts.
    I've done some scripting in InDesign and Quark on Mac OS X but am yet to do anything in Acrobat. I'd want something that could loop through the list of page numbers for each chapter start and extract each chapter as a separate PDF.
    I'd be grateful for pointers to anything useful. I've found bits and bobs with Google but nothing that useful so far.
    I'm decent at cannibalising, hopeless working from scratch.
    Thanks
    Tynan

    Use the extractPages document method: http://livedocs.adobe.com/acrobat_sdk/9.1/Acrobat9_1_HTMLHelp/JS_API_AcroJS.88.465.html
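    Building on that, a rough console/batch sketch (untested; the chapter list and output folder are placeholders, and cPath is subject to Acrobat's safe-path rules):
    // chapterStarts would come from your per-book text file (1-based page numbers)
    var chapterStarts = [1, 25, 61, 90];
    var outFolder = "/c/books/split/";
    for (var i = 0; i < chapterStarts.length; i++) {
        var first = chapterStarts[i] - 1;                 // extractPages is 0-based
        var last  = (i + 1 < chapterStarts.length)
                    ? chapterStarts[i + 1] - 2            // page before the next chapter start
                    : this.numPages - 1;                  // last chapter runs to the end
        this.extractPages({
            nStart: first,
            nEnd:   last,
            cPath:  outFolder + "chapter_" + (i + 1) + ".pdf"
        });
    }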

  • How to show character length 60 ,If we split and distribute..??

    Hi Friends,
    As per other threads, it was mentioned to split the string into parts depending upon the length and send the parts into different InfoObjects... That is ok. After sending it into different objects, how can we see it in the query as a single string? Is it by using customer exit variables or something of that kind??

    MSTF ,
    You can use the table modifier and concatenate the three columns into one at runtime, then hide the other two columns of the table. But again, I assume you are using BEx.
    Arun Varadarajan
    P.S Otherwise store the comments in a BW Internal Table and then display the data in the concerned cell at runtime using the table modifier.

  • How to track BIA delta index merge and rebuild

    Hi Gurus,
                 I have set up process chains which load the BIA delta index and full index rebuild in separate process chains.
    Is there any way to check if BIA is behaving as desired? I know to check RSDDSTATTREX table.
    Regards,
    Anil

    Not a reply from a Guru..
    But I've put my thoughts in, and here are a few takeaways you may wish to look into..
    PS - the BI Accelerator is still quite virgin territory.. there is little documentation and few people have worked with it..
    You can execute some performance checks with tcode - RSRV to validate the same.
    BI Accelerator Performance Checks
    1)Size of Delta Index
    If you have chosen delta mode for an index of a table, new data is not written to the main index but to the delta index. This can significantly improve performance during indexing. However, if the delta index is large, this can have a negative impact on performance when you execute queries. When the delta index reaches 10% of the main index, the system displays a warning.
    The system performs a merge for the index in repair mode. The settings are retained.
    2) Propose Delta Index for Indexes
    It is useful to create a delta index for large indexes that are often updated with new data. New data is not written to the main index, but to the delta index. This can significantly improve the performance of indexing, since the system only performs the optimize step on the smaller set of data from the delta index. The data from the delta index is used at the runtime of the query.
    The system determines proposals from the statistics data: Proposals are those indexes that received new data more than 10 times during the last 10 days. A prerequisite for these proposals is that the statistics for the InfoCube are switched on.
    Data in the main index and delta index should be merged at regular intervals (see test Size of Delta Index).
    In repair mode, the system sets the Has Delta Index property for the proposed indexes. The delta index is created when data is next loaded for this index.
    3)Compare Size of Fact Tables with Fact Index
    The system calculates the number of records in both fact tables (E and F tables) for the InfoCube and compares them with the number of records in the fact index of the BI accelerator index. If the number of records in the BI accelerator index is significantly greater than the number in the InfoCube (more than a 10% difference), you can improve query performance by rebuilding the BIA index.
    The following circumstances can result in differences in the numbers of records:
    ○ The InfoCube was compressed after the BI accelerator index was built. Since the BI accelerator index is not compressed, it may contain more records than the InfoCube.
    ○ Requests were deleted from the InfoCube after the BI accelerator index was built. The requests are deleted from the BIA index in the package dimension only. The records in the fact index are therefore no longer referenced and no longer taken into account when the query is executed; however, they are not deleted.
    http://help.sap.com/saphelp_nw2004s/helpdata/en/6b/cda64246c6c96ae10000000a155106/content.htm
    Hope it Helps
    Chetan
    @CP..

  • Script to search and relink linked image files that were moved?

    Good afternoon
    My InDesign product catalog has links scattered across a lot of different folders on my hard disk, not a single folder, and to keep my catalog up to date I must package it.
    When I move my linked files around because I change the folder structure or do some cleaning, my INDD loses track of its linked files. Currently I need to relink them one by one.
    Therefore I would like to find a script which would scan the hard disk with all the broken links in mind, detect their new locations and relink them. Yes, I am a lazy person.
    Thanks for any clue,

    Thanks so much! I will try soonest.
    How does your script react in the unlikely case of duplicates?
    At least you're honest about your laziness... Here's your reward:
    Just a hint: don't select the root of your volume, or it will loop through every folder in existence. The key to this script is that if you know the general location, select that folder hierarchy and the script will first test for a relative path; otherwise, come back in a few days and it will be finished.
    ~mike
    var processed = 0
    var skipped = 0
    var updated = 0

    if (app.documents.length > 0){
        if(app.activeDocument.links.length > 0){
            var mydoc = app.activeDocument;
            var mylinks = mydoc.links;
            var myRoot = Folder.selectDialog("Choose the volume or server where assets are located", undefined, false);
            main();
        }
        else{
            alert("No Links present")
        }
    }
    else{
        alert("No Documents Open")
    }

    function main(){
        if(myRoot != null){
            for(var i = 0; i < mylinks.length ; i++){
                if(mylinks.item(i).status == LinkStatus.linkMissing){
                    var linkdata = mylinks.item(i).filePath;
                    var my_result = linkRepair(linkdata);
                    if (my_result == false){
                        var filetype = "." + mylinks.item(i).linkType;
                        var mysearch = search(linkdata, filetype);
                        if(mysearch != undefined){
                            var myswitch = confirm(mylinks.item(i).name + " has been found in a different location, Relink and Update?", "Relinker")
                            if(myswitch){
                                mylinks.item(i).relink(File(mysearch));
                                mylinks.item(i).update();
                            }
                        }
                        else{
                            alert(mylinks.item(i).name + " was not found\nFolders processed: " + processed + "\nFiles skipped: " + skipped)
                        }
                    }
                    else{
                        alert(my_result + " has been found");
                        mylinks.item(i).relink(File(my_result));
                    }
                }
            }
        }
    }

    function linkRepair(linkdata){
        var mypath = linkdata.split(":")
        my_status = analyzePath(mypath)
        if(my_status == true){
            alert("UPDATED");
            updated++
        }
        return my_status
    }

    function analyzePath(mypath){
        var num = mypath.length;
        for(var i = 0; i < num-1; i++){
            mypath.shift()
            var newpath = pathRebuild(mypath)
            newpath = (myRoot + newpath)
            if(File(newpath).exists){
                return newpath
            }
        }
        return false
    }

    function pathRebuild(pathArray){
        var solidPath = ""
        for(var i = 0; i < pathArray.length ; i++){
            solidPath += "/" + pathArray[i];
        }
        return solidPath;
    }

    function search(linkdata, filetype){
        var mypath = linkdata.split(":")
        var mylink = mypath.pop();
        var OK = confirm("Relative Path does not exist, search folder hiearchy for file?", true, "File Scanner")
        if(OK){
            var myscanresult = getfiles(mylink, myRoot)
            return myscanresult
        }
    }

    function getfiles(mylink, myBase){
        myBase = Folder(myBase);
        var files = myBase.getFiles("*")
        for(var i = 0; i < files.length; i++){
            try{
                var foldertest = files[i].getFiles();
                processed++
                var myfile = File(files[i] + "/" + mylink)
                if(myfile.exists == true){
                    return myfile
                }
                else{
                    var myscan = getfiles(mylink, Folder(files[i]))
                    if(myscan != undefined){
                        return myscan
                    }
                }
            }
            catch(myerror){
                skipped++
            }
        }
    }

  • StringTokenizer vs. split and empty strings -- some clarification please?

    Hi everybody,
    I posted a question that was sort of similar to this once, asking if it was best to convert any StringTokenizers to calls to split when parsing strings, but this one is a little different. I rarely use split, because if there are consecutive delimiters, it gives empty strings in the array it returns, which I don't want. On the other hand, I know StringTokenizer is slower, but it doesn't give empty strings with consecutive delimiters. I would use split much more often if there was a way to use it and not have to check every array element to make sure it isn't the empty string. I think I may have misunderstood the javadoc to some extent--could anyone explain to me why split causes empty strings and StringTokenizer doesn't?
    Thanks,
    Jezzica85

    Because they are different?
    Tokenizers are designed to return tokens, whereas split simply splits the String up into bits. They have different purposes and uses, to be honest. I believe the results of previous discussions have indicated that Tokenizers are slightly (very slightly, and not meaningfully) faster, and tokenizers also have the option of returning the delimiters, which can be useful and is functionality not present in a straight split.
    However, split and regex in general are newer additions to the Java platform and they do have some advantages. The most obvious is that you cannot use a tokenizer to split up values where the delimiter is multiple characters, and you can with split.
    So in general the advice given to you was good, because split gives you more flexibility down the road. If you don't want
    the empty strings then yes just read them and throw them away.
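    A tiny illustration of the difference (note that split also drops trailing empty strings unless you pass a negative limit):
    import java.util.Arrays;
    import java.util.StringTokenizer;

    public class SplitDemo {
        public static void main(String[] args) {
            String s = "a,,b,";

            // split keeps the empty string between the two consecutive commas
            // (and would keep the trailing one too with s.split(",", -1))
            System.out.println(Arrays.toString(s.split(",")));   // [a, , b]

            // StringTokenizer treats a run of delimiters as a single separator
            StringTokenizer st = new StringTokenizer(s, ",");
            while (st.hasMoreTokens()) {
                System.out.print(st.nextToken() + " ");           // a b
            }
        }
    }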
    Edited by: cotton.m on Mar 6, 2008 7:34 AM
    goddamned stupid forum formatting

  • Create a follow up page in scripts using Duplex and Tumble Duplex in print

    How do you create a follow-up page in scripts using Duplex and Tumble Duplex in print mode?

    It depends on the output device type.
    Regards
    Prabhu

  • Mail won't open.  It then says quit and rebuild and won't do that.  How to fix?

    So I try to open Mail but it hangs. Then it gives the option to quit or quit and rebuild. It will quit, but when I quit and rebuild it hangs and then just shuts off without doing anything. Not quite sure what to do.
    Any help would be great.
    Steve

    You would have to reinstall the OS using the Recovery Volume.
    Try running the combo update.
    10.8.5 Combo Update
    If that doesn't work,try setting up another admin user account to see if the same problem continues. If Back-to-My Mac is selected in System Preferences, the Guest account will not work. The intent is to see if it is specific to one account or a system wide problem. This account can be deleted later.
    Isolating an issue by using another user account
    If the problem is still there, try booting into the Safe Mode using your normal account.  Disconnect all peripherals except those needed for the test. Shut down the computer and then power it back up after waiting 10 seconds. Immediately after hearing the startup chime, hold down the shift key and continue to hold it until the gray Apple icon and a progress bar appear. The boot up is significantly slower than normal. This will reset some caches, forces a directory check, and disables all startup and login items, among other things. When you reboot normally, the initial reboot may be slower than normal. If the system operates normally, there may be 3rd party applications which are causing a problem. Try deleting/disabling the third party applications after a restart by using the application un-installer. For each disable/delete, you will need to restart if you don't do them all at once.
    Safe Mode
    Safe Mode - Abou
