Trying to read/parse a JSON file with IdocScript in 10gR4
Hello all,
I have checked a JSON file into Content Server and would like to parse the data in it and use it to define which wcmPlaceholder to use on my current page template.
I have tried a few things including ssIncludeXml (which returned "Failed to load XML file with Content ID..."). For business user reasons I cannot keep the data in a table or view. JSON was the best way to maintain and pass around the data and it is about 48 "rows" of data.
Any suggestions on how to accomplish this?
Thank you,
Audrey
Thanks @Bex!!
Thanks to reading your blog I have become well versed in IdocScript, JSON, and AJAX on the client side. Much of what I have already accomplished is because of what you wrote! However, that occurs too late to determine what placeholder to include in the page template.
Is there a way to make the AJAX call on the server side to populate a variable available to the page template and then pass into the wcmPlaceholder? Do the docLoadResourceIncludes occur (and complete) before the placeholders are placed?
I am not daunted by the XPath but I am not sure how to 'store those flags in Site Studio XML format'?
[BTW- I had been handling this issue by requiring a "division" variable in the referrer URL but the clients have come back and want legacy URLs supported without the variable since once in the page I have shown we can determine the "division" from the JSON. ]
Thanks,
Audrey
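For what it's worth, since the file is only ~48 flat "rows", the extraction itself is small even without a JSON library. As a rough, hypothetical illustration of the kind of server-side work involved (plain Java standing in for whatever component language is available; the key names and input are made up, and this is not Content Server or IdocScript API):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FlatJsonSketch {
    // Pull every "key": "value" string pair out of a flat JSON object.
    // NOT a general JSON parser -- it only handles unescaped string values,
    // which is enough for a simple hand-maintained data file.
    static Map<String, String> pairs(String json) {
        Map<String, String> out = new LinkedHashMap<>();
        Matcher m = Pattern.compile("\"([^\"]+)\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        while (m.find()) {
            out.put(m.group(1), m.group(2));
        }
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical content; a real component would read the checked-in file first.
        String json = "{ \"division\": \"west\", \"placeholder\": \"ph_west\" }";
        Map<String, String> data = pairs(json);
        System.out.println(data.get("division"));    // west
        System.out.println(data.get("placeholder")); // ph_west
    }
}
```

The point is only that pulling a "division" value out of flat JSON before the placeholder is chosen is a couple of lines of string work once the file's contents are in hand.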
Similar Messages
-
Reading/Parsing an EXE file with Java
Hey guys,
Is there a way (in Java) to parse an EXE file and get its version, description, etc? (mostly the information in the VERSION tab inside the file properties window).
Thanks.
I can't find a thing about that. :(
Perhaps I'll tell you what I need that for:
I got tired of rearranging my start menu and drag-n-droppin' every time I install a new program, so I wanted to create an application that scans a folder of my choice (i.e. c:/progra~1) and creates a folder in the start menu with shortcuts to all of the EXEs in the folder and in its subfolders, arranged by application name (some applications have more than one EXE, including uninstall or update programs) and usage (that I'll do at the end, basing on rules and lists I'll create from DOWNLOAD.COM, for instance). I've done everything but the usage types filtering (putting it into folders: "system", "media", "internet", etc).
So my problem was that the shortcuts' names are ugly most of the time, and I can't tell the full name of an application just from its filename. For example, Google Earth's main EXE is "googleearth.exe". A shortcut that says "Googleearth" isn't so nice to look at. Also, with this kind of name you can tell what it does, but what about other filenames that don't exactly say what the file does?
I needed a way to get the -true- name of the application file, and the only way I see is through the properties, in the "Description" field under "Version".
But alas, that's not so simple :P
I thought about simply getting the name from the folder the EXE is in, but then there is often more than one EXE per folder.
Any other suggestions will be great.
Thanks again, guys. -
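Plain Java has no API for the Windows VERSIONINFO resource; you would have to parse the PE file format yourself (or shell out to a native tool). As a small taste of what that involves, a hedged sketch that only validates the PE signature from raw bytes; the version resource itself sits much deeper in the format:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PeCheck {
    // Returns true if the byte array starts like a Windows PE executable:
    // "MZ" magic, then a little-endian offset at 0x3C pointing at "PE\0\0".
    static boolean looksLikePe(byte[] b) {
        if (b.length < 0x40 || b[0] != 'M' || b[1] != 'Z') return false;
        int peOffset = ByteBuffer.wrap(b, 0x3C, 4).order(ByteOrder.LITTLE_ENDIAN).getInt();
        if (peOffset < 0 || peOffset + 4 > b.length) return false;
        return b[peOffset] == 'P' && b[peOffset + 1] == 'E'
                && b[peOffset + 2] == 0 && b[peOffset + 3] == 0;
    }

    public static void main(String[] args) {
        // Synthetic bytes stand in for Files.readAllBytes(...) on a real EXE.
        byte[] fake = new byte[0x80];
        fake[0] = 'M'; fake[1] = 'Z';
        fake[0x3C] = 0x40;                 // e_lfanew: PE header at offset 0x40
        fake[0x40] = 'P'; fake[0x41] = 'E';
        System.out.println(looksLikePe(fake)); // true
    }
}
```

Finding the "Description" string means walking from this header through the resource directory to the VS_VERSIONINFO block, which is why most people use a native call or an existing PE library instead.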
I have written a binary file with a specific header format in LabVIEW 8.6 and tried to read the same data file using LabVIEW 7.1. Here I found some difficulty. Is there any way to read the data file (of LabVIEW 8.6) using LabVIEW 7.1?
I can think of two possible stumbling blocks:
What are your 8.6 options for "byte order" and "prepend array or string size"?
Overall, many file I/O functions changed with LabVIEW 8.0, so there might not be an exact 1:1 code conversion; you might need to make some modifications. For example, in 7.1 you should use "Write File"; the "binary file VIs" are special purpose (I16 or SGL). What is your data type?
LabVIEW Champion. Do more with less code and in less time. -
How to parse a flat file with C#
I need to parse a flat file with data that looks like
01,1235,555
02,2135,558
16,156,15614
16,000,000
You get the idea. Anyway, I'd like to just used a derived column and move on except I need to put a line number on each row as it comes by so the end looks like,
1,01,1235,555
2,02,2135,558
3,16,156,15614
4,16,000,000
I'm trying to do this with a script transformation, but I can't seem to get the hang of the syntax. I've tried looking at various examples, but everybody seems to prefer VB and I'd like to keep all of my packages in C#. I've set up my input and output columns; I just need to figure out how to write code that says something like:
row_number = 1
line_number = row_number
record_type = input.split.get the second data element
data_point_1 = input.split.get the third data element
row_number = row_number + 1

/* Microsoft SQL Server Integration Services Script Component
 * Write scripts using Microsoft Visual C# 2008.
 * ScriptMain is the entry point class of the script. */
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;

[Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
public class ScriptMain : UserComponent
{
    private int rowCounter = 0;

    // Method that runs once before the rows start to pass
    public override void PreExecute()
    {
        base.PreExecute();
        // Lock variable for read
        VariableDispenser variableDispenser = (VariableDispenser)this.VariableDispenser;
        variableDispenser.LockForRead("User::MaxID");
        IDTSVariables100 vars;
        variableDispenser.GetVariables(out vars);
        // Seed the internal counter with the value of the SSIS variable
        rowCounter = (int)vars["User::MaxID"].Value;
        // Unlock variable
        vars.Unlock();
    }

    // Method that runs for each record in your dataflow
    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // Increment counter
        rowCounter++;
        // Fill the new column
        Row.MaxID = rowCounter;
    }
}
Here is a script to get an incremental ID. On the ReadWriteVariables of the script add the "User::MaxID" variables to get the last number. On the Inputs and Outputs tab, create an output column here in the code it's MaxID numeric data types. -
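Language aside, the transformation above is just a counter carried across rows. Stripped of the SSIS plumbing, the per-row logic looks like this (plain Java, used only to illustrate the counting; the names are made up):

```java
import java.util.ArrayList;
import java.util.List;

public class RowNumberer {
    // Prefix each input row with a running line number, starting at seed + 1.
    // `seed` plays the same role as the User::MaxID variable in the script component.
    static List<String> number(List<String> rows, int seed) {
        List<String> out = new ArrayList<>();
        int counter = seed;
        for (String row : rows) {
            counter++;                    // one increment per row, like rowCounter++
            out.add(counter + "," + row);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> rows = List.of("01,1235,555", "02,2135,558", "16,156,15614", "16,000,000");
        number(rows, 0).forEach(System.out::println);
    }
}
```

The only state is the counter, which is why seeding it once in PreExecute and incrementing it once per row is all the script component needs.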
I am trying to integrate a Simulink model (.mdl) file with the SIT of LabVIEW for RCP and HIL purposes. I am using LabVIEW 8.6, Simulink 6.6 with RTW 6.6 and RTW Embedded Coder 4.6, Visual C Express 2008 and Visual C++ Express 2008. I have selected the system target file nidll.tlc, make command make_rtw, and template nidll_vs.tmf. When I try to generate the .dll file, I get the following error.
Attachments:
SITProblem.JPG 101 KB
Hi,
No, I could not solve the issue. Presently we are using a MicroAutoBox (from dSPACE) for doing the RCP.
Himadri -
Getting Page not found while trying to read ws-addressing.xsd file in IE
Hi All,
Here I am creating a dynamic partner link to call an OSB service in my BPEL process. I want to make it dynamic so I can change the server address and port numbers at runtime. I am getting an error ("Error occurred reading inline schemas") while creating the reference variable as per the BPEL cookbook. To resolve that exception, I created the /bpel/system/xmllib/ folders in SOA 11g and placed the ws-addressing.xsd file there. When I then try to import it in the BPEL process, I still get the same exception.
I even tried to read the ws-addressing.xsd file through the IE browser (http://Host:Port/orabpel/xmllib/ws-addressing.xsd) after placing it in that directory (/bpel/system/xmllib/). My SOA server and everything is running.
Thanks in advance.
mally
Paul,
You can try the suggestion in this thread on the XE forum:
Re: Problem with importing HTML DB applications
It also shows how to turn on logging in the XE web server. -
I cannot read Nikon D600 raw files with CS5
I cannot read Nikon D600 raw files with CS5, I have tried Camera Raw 6.1 update but that has not worked.
The Nikon D600 requires ACR 7.3, which needs a minimum of PS CS6: http://helpx.adobe.com/creative-suite/kb/camera-raw-plug-supported-cameras.html
ACR 7.3 will not work with PS CS5.
So either you need to upgrade to CS6, or use the DNG Converter to convert images to DNG format, which you can open in CS5. -
How to read appended objects from file with ObjectInputStream?
Hi to everyone. I'm new to Java, so my question may look really stupid to most of you, but I couldn't find a solution by myself... I wanted to make an application, something like an address book, that stores information about different people. So I decided to make a class that will hold the information for each person (for example: nickname, name, e-mail, web address and so on); then, using the ObjectOutputStream, the information will be saved to a file. If I want to add a new record for a new person, I'll simply append it to the already existing file. So far so good, but soon I discovered that I cannot read the appended objects using ObjectInputStream.
What I mean is that if I create a new file and then in one session save several objects to it using ObjectOutputStream, they all will be read with no problem by ObjectInputStream. But if in a new session I append new objects, they won't be read. The ObjectInputStream will read the objects from the first session; after that an IOException will be generated and the reading will stop just before the appended objects from the second session.
The following is just a simple test, not actual code from the program I was talking about. Instead of objects containing different kinds of information, I'm using only strings here. To use the program, pass as console arguments "w" to create a new file, followed by the file name and the strings you want saved to the file (as objects). Example: "w TestFile.obj Thats Just A Test". Then to read it, use "r" (for reading), followed by the file name. Example: "r TestFile.obj". As a result you'll see that all the strings saved in the file can be successfully read back. Then do the same with "w TestFile.obj Thats Second Test" and read again with "r TestFile.obj". What will happen is that only the strings from the first session will be read, and the ones from the second session will not.
I am sorry for making this that long, but I couldn't explain it more simply. If someone can give me a solution I'll be happy to hear it! ^.^ I'll also be glad if someone proposes a different approach to the problem! Here is the code:
import java.io.*;

class Fio {
    public static void main(String[] args) {
        try {
            if (args[0].equals("w")) {
                FileOutputStream fos = new FileOutputStream(args[1], true);
                ObjectOutputStream oos = new ObjectOutputStream(fos);
                for (int i = 2; i < args.length; i++) {
                    oos.writeObject(args[i]); // write each argument, not the whole array
                }
                oos.close();
            } else if (args[0].equals("r")) {
                FileInputStream fis = new FileInputStream(args[1]);
                ObjectInputStream ois = new ObjectInputStream(fis);
                try {
                    while (true) {
                        System.out.println((String) ois.readObject());
                    }
                } catch (EOFException eof) {
                    // end of stream reached
                }
                ois.close();
            } else {
                System.out.println("Wrong args!");
            }
        } catch (IndexOutOfBoundsException exc) {
            System.out.println("You must use \"w\" or \"r\" followed by the file name as args!");
        } catch (IOException exc) {
            System.out.println("I/O exception appeared!");
        } catch (ClassNotFoundException exc) {
            System.out.println("Cannot find the needed class");
        }
    }
}
How to read appended objects from file with ObjectInputStream? The short answer is you can't.
The long answer is you can if you put some work into it. The general outline would be to create a file with a format that will allow the storage of multiple streams within it. If you use a RandomAccessFile, you can create a header containing the length. If you use streams, you'll have to use a block protocol. The reason for this is that I don't think ObjectInputStream is guaranteed to read the same number of bytes ObjectOutputStream writes to it (e.g., it could skip ending padding or such).
Next, you'll need to create an object that can return more InputStream objects, one per stream written to the file.
Not trivial, but that's how you'd do it. -
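The block-protocol route above is the fully general answer, but for the plain append case there is a well-known lighter workaround: the per-session failure comes from the extra stream header that each new ObjectOutputStream writes, so a subclass that suppresses the header when appending keeps the file one continuous stream. A sketch (class and file names are my own):

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

public class AppendDemo {
    // After the first session, skip the stream header so the file stays
    // a single continuous object stream that one ObjectInputStream can read.
    static class AppendingObjectOutputStream extends ObjectOutputStream {
        AppendingObjectOutputStream(OutputStream out) throws IOException {
            super(out);
        }
        @Override
        protected void writeStreamHeader() throws IOException {
            reset(); // emit a reset marker instead of a second header
        }
    }

    static void write(File f, String s) throws IOException {
        boolean append = f.exists() && f.length() > 0;
        try (FileOutputStream fos = new FileOutputStream(f, true);
             ObjectOutputStream oos = append
                     ? new AppendingObjectOutputStream(fos)
                     : new ObjectOutputStream(fos)) {
            oos.writeObject(s);
        }
    }

    static List<String> readAll(File f) throws IOException, ClassNotFoundException {
        List<String> out = new ArrayList<>();
        try (ObjectInputStream ois = new ObjectInputStream(new FileInputStream(f))) {
            while (true) {
                out.add((String) ois.readObject());
            }
        } catch (EOFException done) {
            return out; // clean end of stream
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("append", ".obj");
        write(f, "first session");
        write(f, "second session"); // appended in a separate session
        System.out.println(readAll(f));
        f.delete();
    }
}
```

The caveat is that every writer must use the appending subclass after the first session; mix in a plain ObjectOutputStream once and the file is corrupted for a single-stream reader again.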
How do I merge 6 older saved json files with bookmarks in FF 23.0.1? Thanks
I have 6 old json files from discarded computers to merge with current bookmarks in ff 23.0.1. Looking for a utility that will combine all into one bookmark folder. Thanks in advance.
By the way, to spell out a previously mentioned approach to gathering all your bookmarks from the old JSON files (https://support.mozilla.org/en-US/questions/968777#answer-470541):
(1) Create a new blank profile
(2) Restore a JSON file (see [[Restore bookmarks from backup or move them to another computer]])
(3) Export the restored bookmarks to an HTML format in a convenient location, with a unique file name (see [[Export Firefox bookmarks to an HTML file to back up or transfer bookmarks]])
(4) Repeat Steps 2 and 3 until you have HTML format exports of all the old bookmarks
(5) Exit Firefox and start up in your normal profile
(6) Import all the HTML files (see [[Import Bookmarks from a HTML file]])
Likely, you will have lots of duplicates, but you will have everything in one place at least. -
Parsing BLOB (CSV file with special characters) into table
Hello everyone,
In my application, the user uploads a CSV file (stored as a BLOB), which is later read and parsed into a table. The parsing engine is shown below...
The problem is that it won't read national characters such as Ö, Ü etc.; they simply disappear.
Is there any CSV parser that supports national characters? Or, said in other words: is it possible to read a BLOB by characters (where the characters can be Ö, Ü etc.)?
Regards,
Adam
/*-----------------------------------------------
| helper function for csv parsing
+-----------------------------------------------*/
FUNCTION hex_to_decimal(p_hex_str in varchar2) return number
--this function is based on one by Connor McDonald
--http://www.jlcomp.demon.co.uk/faq/base_convert.html
is
v_dec number;
v_hex varchar2(16) := '0123456789ABCDEF';
begin
v_dec := 0;
for indx in 1 .. length(p_hex_str) loop
v_dec := v_dec * 16 + instr(v_hex, upper(substr(p_hex_str, indx, 1))) - 1;
end loop;
return v_dec;
end hex_to_decimal;
/*-----------------------------------------------
| csv parsing
+-----------------------------------------------*/
FUNCTION parse_csv_to_imp_table(in_import_id in number) RETURN boolean IS
PRAGMA autonomous_transaction;
v_blob_data BLOB;
n_blob_len NUMBER;
v_entity_name VARCHAR2(100);
n_skip_rows INTEGER;
n_columns INTEGER;
n_col INTEGER := 0;
n_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2(32767) := NULL;
n_rows number := 0;
n_temp number;
BEGIN
-- shortened
n_blob_len := dbms_lob.getlength(v_blob_data);
n_position := 1;
-- Read and convert binary to char
WHILE (n_position <= n_blob_len) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, n_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
n_temp := ascii(v_char);
n_position := n_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
n_rows := n_rows + 1;
if n_rows > n_skip_rows then
-- Shortened
-- Perform some action with the line (store into table etc.)
end if;
-- Clear out
v_line := NULL;
n_col := 0;
ELSIF v_char != chr(10) and v_char != chr(13) THEN
v_line := v_line || v_char;
if v_char = ';' then
n_col := n_col+1;
end if;
END IF;
END LOOP;
COMMIT;
return true;
EXCEPTION
-- some exception handling
END;
Uploading CSV files into LOB columns and then reading them in PL/SQL: it's come up before (http://forums.oracle.com/forums/thread.jspa?messageID=3454184):
Re: Reading a Blob (CSV file) and displaying the contents
Re: Associative Array and Blob
Number of rows in a clob
...doncha know.
Anyway, it would help if you gave us some basic information: database version and NLS settings would seem particularly relevant here.
Cheers, APC
blog: http://radiofreetooting.blogspot.com -
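One likely culprit, independent of the parser: the byte-at-a-time loop above assumes every character is one byte, but in a UTF-8 encoded upload Ö and Ü occupy two bytes each, so converting chr() per byte mangles them. On the Oracle side the usual fix is to convert the BLOB to a CLOB with the correct character set (e.g. dbms_lob.converttoclob) and parse that. The effect itself is easy to demonstrate; Java is used here only because it is compact:

```java
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        String line = "Köln;Göteborg;Ü";
        byte[] utf8 = line.getBytes(StandardCharsets.UTF_8);

        // Decoding the whole buffer with the right charset round-trips cleanly.
        System.out.println(new String(utf8, StandardCharsets.UTF_8).equals(line)); // true

        // Decoding byte by byte (like chr() in the PL/SQL loop) breaks every
        // multi-byte character: each half of a 2-byte sequence is decoded alone.
        StringBuilder sb = new StringBuilder();
        for (byte b : utf8) {
            sb.append(new String(new byte[] { b }, StandardCharsets.ISO_8859_1));
        }
        System.out.println(sb.toString().equals(line)); // false
    }
}
```

Whether the characters "disappear" or turn into mojibake depends on the database and session character sets, which is why the NLS settings asked about above matter.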
How to parse a big file with Regex/Pattern
I want to parse a big file using Matcher/Pattern, so I thought of using a BufferedReader.
The problem is that a BufferedReader constrains me to read the file line by line, and my patterns are not only inside a line but can also span the end of one line and the beginning of the next.
For example this class:
import java.util.regex.*;
import java.io.*;

public class Reg2 {
    public static void main(String[] args) throws IOException {
        File in = new File(args[1]);
        BufferedReader get = new BufferedReader(new FileReader(in));
        Pattern hunter = Pattern.compile(args[0]);
        String line;
        int lines = 0;
        int matches = 0;
        System.out.print("Looking for " + args[0]);
        System.out.println(" in " + args[1]);
        while ((line = get.readLine()) != null) {
            lines++;
            Matcher fit = hunter.matcher(line);
            // find() locates the pattern anywhere in the line; matches() would
            // require the whole line to match.
            if (fit.find()) {
                System.out.println("" + lines + ": " + line);
                matches++;
            }
        }
        if (matches == 0) {
            System.out.println("No matches in " + lines + " lines");
        }
    }
}
used with the pattern "ERTA" and this file (genomic sequence):
AAAAAAAAAAAERTAAAAAAAAAERT [end of line]
ABBBBBBBBBBBBBBBBBBBBBBERT [end of line]
ACCCCCCCCCCCCCCCCCCCCCCERT [end of line]
returns it has found the pattern only in this line
"1: AAAAAAAAAAAERTAAAAAAAAAERT"
while my pattern is present 4 times.
Is it really a good idea to use a BufferedReader?
Does anyone have an idea?
thanx
Edited by: jfact on Dec 21, 2007 4:39 PM
Edited by: jfact on Dec 21, 2007 4:43 PM
Quick and dirty demo:
import java.io.*;
import java.util.regex.*;

public class LineDemo {
    public static void main(String[] args) throws IOException {
        File in = new File("test.txt");
        BufferedReader get = new BufferedReader(new FileReader(in));
        int found = 0;
        String previous = "", next, lookingFor = "ERTA";
        Pattern p = Pattern.compile(lookingFor);
        while ((next = get.readLine()) != null) {
            // Prepend the tail of the previous line so matches spanning a line
            // break are seen. Keep one char less than the pattern length so a
            // match ending exactly at a line break is not counted twice.
            String toInspect = previous + next;
            Matcher m = p.matcher(toInspect);
            while (m.find()) found++;
            int keep = Math.min(next.length(), lookingFor.length() - 1);
            previous = next.substring(next.length() - keep);
        }
        System.out.println("Found '" + lookingFor + "' " + found + " times.");
    }
}
/* test.txt contains these four lines:
AAAAAAAAAAAERTAAAAAAAAAERT
ABBBBBBBBBBBBBBBBBBBBBBERT
ACCCCCCCCCCCCCCCCCCCCCCERT
ACCCCCCCCCCCCCCCCCCCCCCBBB
*/ -
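If the file comfortably fits in memory, there is an even simpler alternative to stitching line tails together: read the whole file into one String, strip the line breaks, and run a single Matcher over it. A sketch (the inline sample stands in for reading test.txt):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WholeFileMatch {
    // Count pattern occurrences after joining the lines, so a match can
    // cross what used to be a line break.
    static int count(String text, String regex) {
        Matcher m = Pattern.compile(regex).matcher(text.replace("\r", "").replace("\n", ""));
        int found = 0;
        while (m.find()) found++;
        return found;
    }

    public static void main(String[] args) {
        // In practice: String text = Files.readString(Path.of("test.txt")); (Java 11+)
        String text = "AAAAAAAAAAAERTAAAAAAAAAERT\n"
                    + "ABBBBBBBBBBBBBBBBBBBBBBERT\n"
                    + "ACCCCCCCCCCCCCCCCCCCCCCERT\n"
                    + "ACCCCCCCCCCCCCCCCCCCCCCBBB";
        System.out.println(count(text, "ERTA")); // 4
    }
}
```

The trade-off is memory: for a genuinely big file the streaming demo with a carried-over tail is the safer choice.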
Can't read PageMaker 6.5 files with InDesign CS3
I have a lot of old PageMaker 6.5 files that need to be upgraded to InDesign.
When I try to open them in InDesign CS3, I get an error that says:
Cannot open the file "XYZ". Adobe InDesign may not support the file format, a plug-in that supports the file format may be missing, or the file may be open in another application.
I get this error on a Mac Pro G5 (PowerPC), running OS X 10.5.8.
What's interesting is that when I try to open the same files on other computers on the network, it works just fine. The other machines are all Intel Macs, running OS X 10.6.8 or greater. The files are on a Mac Server.
Can this problem be fixed on the Mac G5, maybe by installing a plugin, or is there no hope?
Thanks!
Interesting thought, but I suspect the file is actually a training manual and that's the real name of the file, with no extension at all. PageMaker 6.5 files would have had a .p65 extension, I believe.
-
How to cache json files with dispatcher?
Our project is unique in that we want to cache particular json files. Our performance will rely on it. However it seems dispatcher is only capable of caching HTML. This will be real bad news for us if it's impossible.
In our /cache configuration, we'd like to do:
/rules
  {
  /0000 { /glob "*.html" /type "allow" }
  /0001 { /glob "*.docache.json" /type "allow" }
  }
Unfortunately, after requesting the json file through the dispatcher, the file is not cached at all, even when using /glob "*" /type "allow".
Depends what you mean by clear out. Dispatcher does two things on an activation request: invalidate and evict.
Invalidation:
find the /invalidate section of your dispatcher.any and add :
/0001 { /glob "*.json" /type "allow" }
Obviously you'll need to change the numeric ID of the rule from 0001 to whatever makes sense for you.
Eviction (deletion):
If /mypath/mypage is being activated, then only files meeting globbing pattern /mypath/mypage.* will be deleted, such as /mypath/mypage.html and /mypath/mypage.json... however, child directories WILL NOT be evicted, such as /mypath/mypage/jcr_content/parsys/mycomponent.json
Bang - buncha info for yah. -
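Putting the two halves together, a sketch of what the relevant /cache fragments of dispatcher.any might look like for this setup (rule numbers and the *.docache.json glob are illustrative; merge into your existing farm config):

```
/cache
  {
  # docroot, statfile, etc. unchanged
  /rules
    {
    /0000 { /glob "*.html" /type "allow" }
    /0001 { /glob "*.docache.json" /type "allow" }
    }
  /invalidate
    {
    /0000 { /glob "*.html" /type "allow" }
    /0001 { /glob "*.json" /type "allow" }
    }
  }
```

Note that the dispatcher also refuses to cache responses for other reasons (query strings, authentication headers, a missing file extension), so those are worth ruling out if the file still isn't written to the cache.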
I am trying to read in a .txt file, but I have a FileNotFound Exception
So I am trying to read in as5.txt. It is located in the Assignment5 folder. I probably just have the syntax wrong, but can someone help me?
import java.util.Scanner;
import javax.swing.JFrame;
import java.awt.Color;
import java.io.*;

public class Driver {
    public static void main(String[] args) throws FileNotFoundException {
        JFrame window = new JFrame("Window"); // This creates the window
        window.setBounds(30, 100, 700, 700);
        window.setVisible(true);
        window.setLayout(null);
        // Use a path separator between folder and file name, not a dot
        // (assumes the program runs from the folder containing Assignment5).
        FileReader as5 = new FileReader("Assignment5/as5.txt");
        Scanner file = new Scanner(as5);
        GameSquare square = new GameSquare(window, 0, 0);
        GameSquare[][] board = new GameSquare[8][8];
        int x = 0;
        int y = 0;
        for (int i = 0; i < 8; i++) {
            for (int j = 0; j < 8; j++) {
                board[i][j] = new GameSquare(window, x, y);
                x = x + 80;
            }
            x = 0;
            y = y + 80;
        }
    }
}
If you think a file doesn't exist when you think it should, you can use code like this to print out what files are there:
import java.io.*;

public class Periscope {
    public static void check(File file) {
        if (file.exists()) {
            System.out.println("file exists: " + getPath(file));
            System.out.println();
            System.out.println("DUMP:");
            System.out.println();
            dump(file, "");
        } else {
            System.out.println("file does not exist: " + getPath(file));
            goUp(file);
        }
    }

    static void goUp(File file) {
        File parent = file.getAbsoluteFile().getParentFile();
        if (parent == null) {
            System.out.println("file does not have a parent: " + getPath(file));
        } else {
            check(parent);
        }
    }

    static void dump(File file, String indent) {
        System.out.println(indent + getPath(file));
        File[] children = file.listFiles();
        if (children != null) {
            indent += "    ";
            for (File child : children) {
                dump(child, indent);
            }
        }
    }

    static String getPath(File file) {
        try {
            return file.getCanonicalPath();
        } catch (IOException e) {
            e.printStackTrace();
            return file.getName();
        }
    }

    public static void main(String[] args) {
        check(new File("foo.bar"));
    }
} -
Oops. Can I read my byte stream files with missing headers?
My fault, I know, but I wrote out an 1D array of numbers to a file using the simple Write File vi, not realising that a header would have been a good idea at the time. Now I can't read them back in because LabVIEW doesn't know what they are, and trying to put the original array element (Dbl Precision Number) on the Byte Stream Type of Read File doesn't work.
So, is there any way that I can insert an appropriate header back into the beginning of the file so that I can then read them in successfully?
(No, unfortunately I can't regenerate the data. Believe me, I would if I could!)
Thanks,
Riggy
I suppose you used the Write File function from the File I/O palette, since there is no Write File VI?
If that is the case, your file should be exactly 8*N bytes long if the array had N elements of type DBL.
You can simply read back your data with the 'Read Characters from File' VI (from the File I/O palette) and then typecast the read string to a 1D array of DBL (typecast is the first icon in the 'Advanced--Data Manipulation' palette).
-Franz
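The same typecast idea works outside LabVIEW as well: with no header, the file is just 8*N big-endian IEEE-754 doubles, so any language's byte tools can decode it. A sketch in Java, with synthetic bytes standing in for the real file:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ReadHeaderlessDbl {
    // Interpret a headerless byte stream as big-endian IEEE-754 doubles,
    // which is how LabVIEW's Write File emits a 1D DBL array without a header.
    static double[] decode(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.BIG_ENDIAN);
        double[] out = new double[raw.length / 8];
        for (int i = 0; i < out.length; i++) {
            out[i] = buf.getDouble();
        }
        return out;
    }

    public static void main(String[] args) {
        // Demo with synthetic bytes; in practice, read the real file's bytes
        // (e.g. with Files.readAllBytes) and pass them to decode().
        ByteBuffer buf = ByteBuffer.allocate(16).order(ByteOrder.BIG_ENDIAN);
        buf.putDouble(3.5);
        buf.putDouble(-1.25);
        double[] values = decode(buf.array());
        System.out.println(values[0] + " " + values[1]); // 3.5 -1.25
    }
}
```

This mirrors the 'Read Characters from File' plus typecast recipe: read raw bytes, then reinterpret them with the byte order LabVIEW used when writing.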