Hierarchical data structure
I am trying to represent the following data in a hierarchical format, but I am not going to use any Swing components, so JTree and the like are out, and XML is probably out too. I was hoping some form of collection would work, but I can't seem to get it right!
Example Scenario
Football League -> Football Team -> Player Name
West
    Chiefs
        xyz
        abc
        mno
    Broncos
        asq
        daff
This hierarchical structure has a couple of layers, so I don't know how to do it feasibly. I have tried nesting HashMaps inside each other so that, as I iterate through the data, I can check whether a key exists and, if it does, retrieve it and add to it.
Does anyone know a good way to do this? Code samples would be appreciated!!!
Thank you!
Hi Jason,
I guess you wouldn't want to use Swing components or JTree unless your app has a GUI, and even then you would want a structure other than, say, JTree to represent your data.
You have plenty of options; one is nested HashMaps. You could just as well use nested Lists or arrays, or custom objects that represent your data structure.
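As a rough illustration of the nested-map option (a sketch only; the league -> team -> player layout is taken from your example, and the addPlayer helper is just an illustrative name):
import java.util.*;
public class LeagueMaps {
    // league name -> (team name -> list of player names)
    private Map<String, Map<String, List<String>>> leagues =
            new HashMap<String, Map<String, List<String>>>();
    public void addPlayer(String league, String team, String player) {
        Map<String, List<String>> teams = leagues.get(league);
        if (teams == null) {
            teams = new HashMap<String, List<String>>();
            leagues.put(league, teams);
        }
        List<String> players = teams.get(team);
        if (players == null) {
            players = new ArrayList<String>();
            teams.put(team, players);
        }
        players.add(player);
    }
    public static void main(String[] args) {
        LeagueMaps maps = new LeagueMaps();
        maps.addPlayer("West", "Chiefs", "xyz");
        maps.addPlayer("West", "Chiefs", "abc");
        maps.addPlayer("West", "Broncos", "asq");
        // e.g. {West={Broncos=[asq], Chiefs=[xyz, abc]}} (HashMap order is not guaranteed)
        System.out.println(maps.leagues);
    }
}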
I don't know why you would exclude XML. There is the question anyway of how you get the data into your application: is the source a database or a text file? Why not use XML, since your data seems to have a tree structure anyway and XML seems to fit the bill?
An issue to consider in that case is the amount of data. Large XML files have performance problems associated with them.
In terms of a nice design I would probably do something like this (assuming the structure of your data is fixed):
public class Leagues {
    private List leagues = new ArrayList();
    public FootballLeague getLeagueByIndex(int index) {
        return (FootballLeague) leagues.get(index);
    }
    public FootballLeague getLeagueByName(String name) {
        // run through the league list picking out the league with the given name
        return null; // TODO: implement the lookup
    }
    public void addLeague(FootballLeague l) {
        leagues.add(l);
    }
}
Next you define a class called FootballLeague:
public class FootballLeague {
    private List teams = new ArrayList();
    private String leagueName;
    public FootballTeam getTeamByIndex(int index) {
        return (FootballTeam) teams.get(index);
    }
    public FootballTeam getTeamByName(String name) {
        // run through the team list picking out the team with the given name
        return null; // TODO: implement the lookup
    }
    public void addTeam(FootballTeam t) {
        teams.add(t);
    }
    public void setLeagueName(String newName) {
        this.leagueName = newName;
    }
    public String getLeagueName() {
        return this.leagueName;
    }
}
Obviously you will continue defining classes for FootballTeam and Player following that pattern. I usually apply that kind of structure for complex hierarchical data. Nested lists would work just as well, but dealing with nested lists rather than a simple API for your data structures can be a pain (especially if you have many levels in your hierarchy).
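Following that pattern, the next two levels might look something like this (a sketch only; the exact fields, e.g. playerName, are assumptions, and the java.util imports are included for completeness):
import java.util.*;
public class FootballTeam {
    private List players = new ArrayList();
    private String teamName;
    public Player getPlayerByIndex(int index) {
        return (Player) players.get(index);
    }
    public void addPlayer(Player p) {
        players.add(p);
    }
    public void setTeamName(String newName) {
        this.teamName = newName;
    }
    public String getTeamName() {
        return this.teamName;
    }
}
public class Player {
    private String playerName;
    public Player(String playerName) {
        this.playerName = playerName;
    }
    public String getPlayerName() {
        return this.playerName;
    }
}
A quick usage sketch: create a FootballLeague, set its name to "West", add a FootballTeam named "Chiefs", and add new Player("xyz") to that team, mirroring the example data above.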
Hope that helps.
The Dude
Similar Messages
-
Wanna learn to implement hierarchical data structure
I want to learn how to handle hierarchical data in Java.
For instance, suppose the data contains 6 main nodes, every node contains 2 sub-nodes, there are 4 nodes under the 3rd node, and the 5th one contains two more sub-nodes, one under another.
So how will that be implemented?
Of course it must be possible to implement, but how can I do it if I do not know the depth and number of nodes and only learn them at runtime?
I had attempted to create something of this kind using Turbo C++ 3.5, but after two weeks of intensive programming I was left utterly confused by innumerable pointers, pointers to pointers, pointers to pointers to pointers, and more. In the end it was me who forgot which pointer was pointing to what.
Well, just start by making a Node class. To allow Nodes to have children, give each Node an array (or ArrayList, Vector, etc.) of other Nodes.
for example:
class Node {
    private ArrayList<Node> children = new ArrayList<Node>();
}
Put whatever else you need in there.
You can then traverse these through methods you write, to return child nodes. If you need the Nodes to have knowledge of their parents, add a Node parent field in your Node class.
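To make that concrete, a fleshed-out version might look like this (a sketch only; the name field, addChild and printTree are just examples of "whatever else you need"):
import java.util.*;
class Node {
    private String name;
    private ArrayList<Node> children = new ArrayList<Node>();
    Node(String name) { this.name = name; }
    void addChild(Node child) { children.add(child); }
    // recursion handles any depth, so the shape can be decided at runtime
    void printTree(String indent) {
        System.out.println(indent + name);
        for (Node child : children) {
            child.printTree(indent + "  ");
        }
    }
    public static void main(String[] args) {
        Node root = new Node("root");
        Node branch = new Node("3rd node");
        root.addChild(branch);
        branch.addChild(new Node("sub node"));
        root.printTree("");
    }
}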
Essentially, keep things as simple as possible, and this will allow you to write cleaner code and also decide on the depth of the structure at runtime, like you describe. -
Updating a hierarchical data structure from an entry processor
I have a tree-like data structure that I am attempting to update from an AbstractProcessor.
Imagine that one value is a collection of child value keys, and I want to add a new child node in the tree. This requires updating the parent node (which contains the list of child nodes), and adding the child value which is a separate entry.
I would rather not combine all bits of data into one value (which could make for a large serialized object), as sometimes I prefer to access (read-only) the child values directly. The child and the parent values live in the same partition in the partitioned cache, though, so get access should be local.
However, I am attempting to call put() on the same cache to add a child value which is apparently disallowed. It makes sense that a blocking call is involved in this operation, as it needs to push out this data to the cluster member that has the backup value for the same operation, but is there a general problem with performing any kind of re-entrant work on Coherence caches from an entry processor for any value that is not the value you are processing? I get the assertion below.
I am fine with the context blocking (preventing reads or writes on the parent node value) until the child completes, presuming that I handle deadlock prevention myself due to the order in which values are accessed.
Is there any way to do this, either with entry processors or not? My code previously used lock, get and put to operate on the tree (which worked), but I am trying to convert this code to use entry processors to be more efficient.
2008-12-05 16:05:34.450 (ERROR)[Coherence/Logger@9219882 3.4/405]: Assertion failed: poll() is a blocking call and cannot be called on the Service thread
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:4)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:30)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:1)
at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.put(DistributedCache.CDB:1)
at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
at com.tangosol.net.cache.CachingMap.put(CachingMap.java:928)
at com.tangosol.net.cache.CachingMap.put(CachingMap.java:887)
at com.tangosol.net.cache.NearCache.put(NearCache.java:286)
at com.conduit.server.properties.CLDistributedPropertiesManager$UpdatePropertiesProcessor.process(CLDistributedPropertiesManager.java:249)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.invoke(DistributedCache.CDB:20)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onInvokeRequest(DistributedCache.CDB:50)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$InvokeRequest.run(DistributedCache.CDB:1)
at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
at java.lang.Thread.run(Thread.java:637)
Hi,
reentrant calls to the same Coherence service are very much recommended against.
For more about it, please look at the following Wiki page:
http://wiki.tangosol.com/display/COH34UG/Constraints+on+Re-entrant+Calls
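To make that concrete, one common workaround is to keep the entry processor focused on the single entry it owns and issue the second write from the calling thread after invoke() returns, rather than from inside process(). A rough sketch only, assuming Coherence 3.x class names (CacheFactory, NamedCache, AbstractProcessor) and a hypothetical ParentNode value class with an addChildKey method:
import java.io.Serializable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

// Updates only the parent entry it is invoked against; no other cache access inside process().
public class AddChildKeyProcessor extends AbstractProcessor implements Serializable {
    private final Object childKey;   // must itself be serializable (or use POF)

    public AddChildKeyProcessor(Object childKey) {
        this.childKey = childKey;
    }

    public Object process(InvocableMap.Entry entry) {
        ParentNode parent = (ParentNode) entry.getValue();  // ParentNode: hypothetical value class
        parent.addChildKey(childKey);                       // hypothetical method adding the child reference
        entry.setValue(parent);
        return null;
    }
}
And on the caller side (ordinary client code, not inside an entry processor), the child is written with a plain put before the parent is linked, so no re-entrant call happens on the service thread:
NamedCache cache = CacheFactory.getCache("tree");
cache.put(childKey, childNode);                              // write the child entry first
cache.invoke(parentKey, new AddChildKeyProcessor(childKey)); // then link it into the parent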
Best regards,
Robert -
Hierarchical data structures (in a single table)
Hi,
If I have a hierarchy of objects stored in a table -
ORG_UNIT
ID
PARENT_ID
NAME
And the JDO mapping for an OrgUnit contains a parent OrgUnit and a
Collection of children.
Is there an efficient way of pulling them out of the database.
It is currently loading each individual parent's kids.
This is going to be pretty slow if there are say 500 OrgUnits in the
database.
If it would be better to pull them all out and build the hierarchy up in
code (as it was being done in straight JDBC). How can I efficiently obtain
the parent or children without doing exactly the same?
Thanks,
Simon
Simon,
There will be no db access for every child - you will read all child records for a particular parent at once when you try to access its child collection. Granted, for terminal leaves you will get a db access to load an empty collection, so effectively you get a db access per node. If your goal is always to load and traverse the entire tree, that will be expensive. But the beauty of hierarchical structures is that, while they can be huge (millions of nodes), you do not need to load the whole tree to navigate it - just the path you need. This is where lazy loading excels, so overall, on large trees, you will be much better off not loading the whole thing at once. However, if you still want to do it, nothing prevents you from having no persistent collection of child records in the OrgUnit class at all, only a reference to a parent: load the entire table using a query and then build the tree in memory yourself as you iterate over the query result set. You can probably even do it in a single iteration over the result set. I would never do it myself, though. In my opinion it defeats the ease of use and cleanness of your object model.
Alex
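For completeness, the "load everything and build the tree in memory" approach Alex describes could look roughly like this (a sketch only; the OrgUnit fields and the flat list stand in for whatever your JDO query returns):
import java.util.*;

class OrgUnit {
    long id;
    Long parentId;           // null for the root
    String name;
    List<OrgUnit> children = new ArrayList<OrgUnit>();
}

class OrgTreeBuilder {
    // builds the hierarchy from a flat list of records in two passes
    static OrgUnit buildTree(List<OrgUnit> flat) {
        Map<Long, OrgUnit> byId = new HashMap<Long, OrgUnit>();
        for (OrgUnit u : flat) {
            byId.put(Long.valueOf(u.id), u);
        }
        OrgUnit root = null;
        for (OrgUnit u : flat) {
            if (u.parentId == null) {
                root = u;                                   // top-level unit
            } else {
                OrgUnit parent = byId.get(u.parentId);
                if (parent != null) {
                    parent.children.add(u);                 // attach to its parent
                }
            }
        }
        return root;
    }
}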
"Simon Horne" <[email protected]> wrote in message
news:ag1p9p$9si$[email protected]..
Hi,
If I have a hierarchy of objects stored in a table -
ORG_UNIT
ID
PARENT_ID
NAME
And the JDO mapping for an OrgUnit contains a parent OrgUnit and a
Collection of children.
Is there an efficient way of pulling them out of the database.
It is currently loading each individual parent's kids.
This is going to be pretty slow if there are say 500 OrgUnits in the
database.
If it would be better to pull them all out and build the hierarchy up in
code (as it was being done in straight JDBC). How can I efficiently obtain
the parent or children without doing exactly the same?
Thanks,
Simon -
Dictionary Structure---data structure of Dictionary
I don't know how to build a data structure for a dictionary...
Somebody said it is built with a binary tree, but I don't know how.
Can somebody help me?
Thanks for reading my topic.
A dictionary is not a tree/hierarchical structure. Have you ever opened a dictionary before?! If you have even once, you already know the structure of a dictionary. Now, how to build that in Java is another question. I'm not sure of your requirements, but you could start by creating an object that represents a single term, its definitions, usages, etc. Then you can create a List of them (and sort them alphabetically). Simple enough?
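A bare-bones version of that suggestion might look like this (a sketch only; the Term fields and sorting by word are assumptions about your requirements):
import java.util.*;

class Term {
    String word;
    List<String> definitions = new ArrayList<String>();
    Term(String word) { this.word = word; }
}

class Dictionary {
    private List<Term> terms = new ArrayList<Term>();

    void add(Term t) {
        terms.add(t);
        // keep the list sorted alphabetically by word
        Collections.sort(terms, new Comparator<Term>() {
            public int compare(Term a, Term b) {
                return a.word.compareToIgnoreCase(b.word);
            }
        });
    }

    // simple linear lookup; since the list stays sorted, a binary search would also work
    Term lookup(String word) {
        for (Term t : terms) {
            if (t.word.equalsIgnoreCase(word)) return t;
        }
        return null;
    }
}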
-
What is the best way to explore a hierarchical folder structure?
Hallo,
I need to access and navigate a hierarchical folder structure hosted in a MS SQL Server database. In particular there is a root folder containing several folders. Each child-folder contains further nested folders or documents.
For each item I need to retrieve the folder details (name, path, etc.) and the document details (title, author, etc.) that are retrievable from the DB fields. Afterwards I will use these data to create a semantic web ontology using the Jena API.
My question was about which is the best way to proceed.
A colleague of mine suggested using the "WITH" clause of SQL Server to create and use a linked list to navigate the structure easily, executing just one query rather than several (one for each level of the nested loops). However, in this way the solution will work only with the MS SQL Server database, while my goal is to achieve a more general solution.
May someone help me?
Thank you in advance,
Francesco
My goal is to create a documents library ontology, obtaining from each element of the hierarchy (folder or document) some data (title, parent, etc.) and using them to "label" the ontology resources.
I will use a little of both approches in the following way:
1) I make just ONE query on folder table to get, from each folder, its path (eg. root/fold1/fold2/doc1.pdf), its ID and ParentID and ONE on the Documents table to get the containerID, title, etc.
2) I create as many Folder objects as the retrieved records and an HashTable, where the KEY = Folder.ParentID value and the VALUE = Vector<Folder>. I add then each object to the Vector relative to the same ParentID. In this way I have an Vector containing all the folders child of the same parent folder and I do the same for an HashTable keeping the documents contained in a specific folder.
3) I extract from the HashTable the root folder (whose ParentID is always "0" and whose ID is "1"), then the appendChild() method is invoked (see code):
public static void appendChild(int ID, Resource RES)
{
    Vector<Folder> currFold = table.get(ID);
    for (int i = 0; i < currFold.size(); i++)
    {
        // extract the child and create the relative resource (newRES) for it here
        Folder child = currFold.get(i);
        if (table.containsKey(child.getID()))
        {
            appendChild(child.getID(), newRES);
        }
    }
}
In this way I go deep into the hierarchical structure using a "left most" procedure. I made a test and the output is correct. However, such an approach must be applied to folders about 4 levels deep (around 30 in total), which also contain documents, to create the documents library of a project. Then I must process around 20 projects to achieve such a documents library representation for all of them.
By the way, I do not have to maintain the HashTable content after I have created the docs library ontology. Hence I use just one HashTable for ALL the projects and flush it after I finish the loop for one project, in order to save resources.
My question is: is right my approach or might I improve it in some way?
Thank you for every suggesion/comment.
Francesco
Edited by: paquito81 on May 27, 2008 8:15 AM -
Hierarchical treeview structure in Reports
Hi,
I am working with an MNC; can anybody help me out with guidelines for building a hierarchical tree view structure in Reports?
Your early response would be helpful to me.
Bye
Pavan
Hi,
see this site, you will find lots of examples:
http://www.sapdev.co.uk/reporting/alv/alvtree.htm
The ALV tree report uses object method functionality in order to produce a
tree-structured ALV output.
The creation of an ALVtree report first requires the creation of a simple program to build the ALV
details such as the fieldcatalog and to call a screen which will be used to display the ALVTree.
The screen should be created with a 'custom control' where you wish the ALVtree report to appear.
For the following example it will have the name 'SCREEN_CONTAINER'.
<b>Creation of Main Program code, Data declaration and screen call</b>
*& Report ZDEMO_ALVTREE *
*& Example of a simple ALV Grid Report *
*& The basic requirement for this demo is to display a number of *
*& fields from the EKPO and EKKO table in a tree structure. *
Amendment History *
REPORT zdemo_alvgrid .
*Data Declaration
TABLES: ekko.
TYPE-POOLS: slis. "ALV Declarations
TYPES: BEGIN OF t_ekko,
ebeln TYPE ekpo-ebeln,
ebelp TYPE ekpo-ebelp,
statu TYPE ekpo-statu,
aedat TYPE ekpo-aedat,
matnr TYPE ekpo-matnr,
menge TYPE ekpo-menge,
meins TYPE ekpo-meins,
netpr TYPE ekpo-netpr,
peinh TYPE ekpo-peinh,
END OF t_ekko.
DATA: it_ekko TYPE STANDARD TABLE OF t_ekko INITIAL SIZE 0,
it_ekpo TYPE STANDARD TABLE OF t_ekko INITIAL SIZE 0,
it_emptytab TYPE STANDARD TABLE OF t_ekko INITIAL SIZE 0,
wa_ekko TYPE t_ekko,
wa_ekpo TYPE t_ekko.
DATA: ok_code like sy-ucomm, "OK-Code
save_ok like sy-ucomm.
*ALV data declarations
DATA: fieldcatalog TYPE lvc_t_fcat WITH HEADER LINE.
DATA: gd_fieldcat TYPE lvc_t_fcat,
gd_tab_group TYPE slis_t_sp_group_alv,
gd_layout TYPE slis_layout_alv.
*ALVtree data declarations
CLASS cl_gui_column_tree DEFINITION LOAD.
CLASS cl_gui_cfw DEFINITION LOAD.
DATA: gd_tree TYPE REF TO cl_gui_alv_tree,
gd_hierarchy_header TYPE treev_hhdr,
gd_report_title TYPE slis_t_listheader,
gd_logo TYPE sdydo_value,
gd_variant TYPE disvariant.
*Create container for alv-tree
DATA: l_tree_container_name(30) TYPE c,
l_custom_container TYPE REF TO cl_gui_custom_container.
*Includes
*INCLUDE ZDEMO_ALVTREEO01. "Screen PBO Modules
*INCLUDE ZDEMO_ALVTREEI01. "Screen PAI Modules
*INCLUDE ZDEMO_ALVTREEF01. "ABAP Subroutines(FORMS)
*Start-of-selection.
START-OF-SELECTION.
ALVtree setup data
PERFORM data_retrieval.
PERFORM build_fieldcatalog.
PERFORM build_layout.
PERFORM build_hierarchy_header CHANGING gd_hierarchy_header.
PERFORM build_report_title USING gd_report_title gd_logo.
PERFORM build_variant.
Display ALVtree report
call screen 100.
*& Form DATA_RETRIEVAL
Retrieve data into Internal tables
FORM data_retrieval.
SELECT ebeln
UP TO 10 ROWS
FROM ekko
INTO corresponding fields of TABLE it_ekko.
loop at it_ekko into wa_ekko.
SELECT ebeln ebelp statu aedat matnr menge meins netpr peinh
FROM ekpo
appending TABLE it_ekpo
where ebeln eq wa_ekko-ebeln.
endloop.
ENDFORM. " DATA_RETRIEVAL
*& Form BUILD_FIELDCATALOG
Build Fieldcatalog for ALV Report
FORM build_fieldcatalog.
* Please note there are a number of differences between the structure of
* ALVtree fieldcatalogs and ALVgrid fieldcatalogs.
* For example the field seltext_m is replaced by scrtext_m in ALVtree.
fieldcatalog-fieldname = 'EBELN'. "Field name in itab
fieldcatalog-scrtext_m = 'Purchase Order'. "Column text
fieldcatalog-col_pos = 0. "Column position
fieldcatalog-outputlen = 15. "Column width
fieldcatalog-emphasize = 'X'. "Emphasize (X or SPACE)
fieldcatalog-key = 'X'. "Key Field? (X or SPACE)
fieldcatalog-do_sum = 'X'. "Sum Column?
fieldcatalog-no_zero = 'X'. "Don't display if zero
APPEND fieldcatalog TO gd_fieldcat.
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'EBELP'.
fieldcatalog-scrtext_m = 'PO Item'.
fieldcatalog-outputlen = 15.
fieldcatalog-col_pos = 1.
APPEND fieldcatalog TO gd_fieldcat..
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'STATU'.
fieldcatalog-scrtext_m = 'Status'.
fieldcatalog-outputlen = 15.
fieldcatalog-col_pos = 2.
APPEND fieldcatalog TO gd_fieldcat..
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'AEDAT'.
fieldcatalog-scrtext_m = 'Item change date'.
fieldcatalog-outputlen = 15.
fieldcatalog-col_pos = 3.
APPEND fieldcatalog TO gd_fieldcat..
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'MATNR'.
fieldcatalog-scrtext_m = 'Material Number'.
fieldcatalog-outputlen = 15.
fieldcatalog-col_pos = 4.
APPEND fieldcatalog TO gd_fieldcat..
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'MENGE'.
fieldcatalog-scrtext_m = 'PO quantity'.
fieldcatalog-outputlen = 15.
fieldcatalog-col_pos = 5.
APPEND fieldcatalog TO gd_fieldcat..
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'MEINS'.
fieldcatalog-scrtext_m = 'Order Unit'.
fieldcatalog-outputlen = 15.
fieldcatalog-col_pos = 6.
APPEND fieldcatalog TO gd_fieldcat..
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'NETPR'.
fieldcatalog-scrtext_m = 'Net Price'.
fieldcatalog-outputlen = 15.
fieldcatalog-col_pos = 7.
fieldcatalog-datatype = 'CURR'.
APPEND fieldcatalog TO gd_fieldcat..
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'PEINH'.
fieldcatalog-scrtext_m = 'Price Unit'.
fieldcatalog-outputlen = 15.
fieldcatalog-col_pos = 8.
APPEND fieldcatalog TO gd_fieldcat..
CLEAR fieldcatalog.
ENDFORM. " BUILD_FIELDCATALOG
*& Form BUILD_LAYOUT
Build layout for ALV grid report
FORM build_layout.
gd_layout-no_input = 'X'.
gd_layout-colwidth_optimize = 'X'.
gd_layout-totals_text = 'Totals'(201).
gd_layout-totals_only = 'X'.
gd_layout-f2code = 'DISP'. "Sets fcode for when double
"click(press f2)
gd_layout-zebra = 'X'.
gd_layout-group_change_edit = 'X'.
gd_layout-header_text = 'helllllo'.
ENDFORM. " BUILD_LAYOUT
*& Form build_hierarchy_header
build hierarchy-header-information
-->P_L_HIERARCHY_HEADER structure for hierarchy-header
FORM build_hierarchy_header CHANGING
p_hierarchy_header TYPE treev_hhdr.
p_hierarchy_header-heading = 'Hierarchy Header'(013).
p_hierarchy_header-tooltip = 'This is the Hierarchy Header !'(014).
p_hierarchy_header-width = 30.
p_hierarchy_header-width_pix = ''.
ENDFORM. " build_hierarchy_header
*& Form BUILD_REPORT_TITLE
Build table for ALVtree header
<-> p1 Header details
<-> p2 Logo value
FORM build_report_title CHANGING
pt_report_title TYPE slis_t_listheader
pa_logo TYPE sdydo_value.
DATA: ls_line TYPE slis_listheader,
ld_date(10) TYPE c.
List Heading Line(TYPE H)
CLEAR ls_line.
ls_line-typ = 'H'.
* ls_line-key is not used for this type (H)
ls_line-info = 'PO ALVTree Display'.
APPEND ls_line TO pt_report_title.
Status Line(TYPE S)
ld_date(2) = sy-datum+6(2).
ld_date+2(1) = '/'.
ld_date+3(2) = sy-datum+4(2).
ld_date+5(1) = '/'.
ld_date+6(4) = sy-datum(4).
ls_line-typ = 'S'.
ls_line-key = 'Date'.
ls_line-info = ld_date.
APPEND ls_line TO pt_report_title.
Action Line(TYPE A)
CLEAR ls_line.
ls_line-typ = 'A'.
CONCATENATE 'Report: ' sy-repid INTO ls_line-info SEPARATED BY space.
APPEND ls_line TO pt_report_title.
ENDFORM.
*& Form BUILD_VARIANT
Build variant
form build_variant.
Set repid for storing variants
gd_variant-report = sy-repid.
endform. " BUILD_VARIANT
<b>Creation of 'INCLUDES' to store ALVtree code</b>
Three includes need to be created in-order to store the ABAP code required for the ALVtree report.
Typically these will be one for the PBO modules, one for PAI modules and one for the subroutines(FORMs):
*Includes
include zdemo_alvtreeo01. "Screen PBO Modules
include zdemo_alvtreei01. "Screen PAI Modules
include zdemo_alvtreef01. "ABAP Subroutines(FORMS)
If you are using the code provide within the ALVtree section of this web site simply create the includes by
un-commenting the 'Includes' section within the code(see below) and double clicking on the name
i.e. 'zdemo_alvtreeo01'. Obviously these can be renamed.
*Includes
*include zdemo_alvtreeo01. "Screen PBO Modules
*include zdemo_alvtreei01. "Screen PAI Modules
*include zdemo_alvtreef01. "ABAP Subroutines(FORMS)
*Start-of-selection.
start-of-selection.
<b>Create Screen along with PBO and PAI modules for screen</b>
The next step is to create screen 100, to do this double click on the '100' within the call screen
command(Call screen 100.). Enter short description and select 'Normal' as screen type.
To create the PBO and PAI modules insert that code below into the screen's flow logic. Much of this code
should automatically have been inserted during screen creation but with the module lines commented out.
Simply remove the comments and double click the module name (STATUS_0100 and
USER_COMMAND_0100) in-order to create them, this will display the perform/module creation screen.
The MODULES are usually created within two includes one ending in 'O01' for PBO modules and
one ending in 'I01' for PAI modules(See code below).
Please note in order for these includes to be displayed on the creation screen they need to have be
created along with the following lines of code added to the main prog(see previous step):
INCLUDE ZDEMO_ALVTREEO01. "Screen PBO Modules
INCLUDE ZDEMO_ALVTREEI01. "Screen PAI Modules
Otherwise use the 'New Include' entry and SAP will add the necessary line for you.
Screen flow logic code
PROCESS BEFORE OUTPUT.
MODULE STATUS_0100.
PROCESS AFTER INPUT.
MODULE USER_COMMAND_0100.
***INCLUDE Z......O01 .
*& Module STATUS_0100 OUTPUT
PBO Module
module status_0100 output.
SET PF-STATUS 'xxxxxxxx'.
SET TITLEBAR 'xxx'.
endmodule. " STATUS_0100 OUTPUT
***INCLUDE Z......I01 .
*& Module USER_COMMAND_0100 INPUT
PAI Module
module user_command_0100 input.
endmodule. " USER_COMMAND_0100 INPUT
<b>Define OK CODE(SY-UCOMM) variable</b>
In order to define the OK CODE you must first declare a variable of type SY-UCOMM and then insert this
variable into the OK code declaration within the element list (see screen shot below). If you have used
the code contained on this website the OK code should already have been declared as OK_CODE.
i.e. OK_CODE like sy-ucomm.
Note: there is also a variable called SAVE_OK, it is good practice to store the returned ok code into
a work area as soon as you enter the PAI processing.
<b>Add screen control to PAI module(INCLUDE Z......I01)</b>
The following code adds simple screen control to the report and whenever the user presses the cancel,
exit or back icon they will exit from the report. It also processes the ALVtree user interactions within the
'others' case statement
INCLUDE Z......I01 *
*& Module USER_COMMAND_0100 INPUT
text
module user_command_0100 input.
DATA return TYPE REF TO cl_gui_event.
save_ok = ok_code.
case ok_code.
when 'BACK' or '%EX' or 'RW'.
Exit program
leave to screen 0.
Process ALVtree user actions
when others.
call method cl_gui_cfw=>get_current_event_object
receiving
event_object = return.
call method cl_gui_cfw=>dispatch.
endcase.
endmodule. " USER_COMMAND_0100 INPUT
<b>Create pf-status</b>
In order to created the pf-status for the screen you need to un-comment '* SET PF-STATUS 'xxxxxxxx'
and give it a name.
i.e. SET PF-STATUS 'STATUS1'.
Step 1
Now double click on 'STATUS1' in-order to create the pf-status. Enter short text, select status type as
'Online status' and click save.
Step 2
You should now be presented with the status creation screen. Choose 'Adjust template' from the Extras menu
(4.6 onwards only).
Step 3
Now select 'List status' and click the green tick (see below).
Step 4
All the basic menu bars/buttons should now have been entered. Now click save then activate. The
pf-status has now been completed.
Once you have the main program code in place to call the screen which will display the
ALVtree, you now need to setup the actual ALVtree and populate it. As this is screen
based(dialog) the display coding will be performed within the PBO screen module.
Therefor you need to add the following processes to the PBO(STATUS_0100) module
of the screen.
<b>Create Custom control</b>
Via screen painter insert a 'custom control' onto the screen and give it the name 'SCREEN_CONTAINER'. This is
the position where the ALVtree will appear, so align it appropriately.
http://www.sapdev.co.uk/reporting/alv/alvtree/alvtree_basic.htm
-
What is hierarchical data transfer in functional location
Hi,
I want to know in detail about hierarchical data transfer and horizontal data transfer in functional locations.
Can anyone help me in this regard?
Please give information with some example if you don't mind.
thanks in advance
regards
gunnu.
Hi
From SAP HELP
Hierarchical Data Transfer
Definition
You can maintain data at a high level within a hierarchical object structure. The system will automatically transfer the data changes to the levels below that are affected.
The maintenance planner group is changed for the clarification plant described in Functional Location. The employee responsible for maintaining the master data makes the change to the master record of the highest functional location C1 and saves the changes. The system automatically makes the same change for all affected functional locations below the functional location C1, and issues a message to inform the employee of these changes.
Horizontal Data Transfer
Definition
With horizontal data transfer you can differentiate between:
Data transfer from reference location to functional location
Data transfer from functional location to installed piece of equipment
The ABC indicator of the functional location C1-B02-1 "Ventilator" is to be changed for several clarification plants.
The employee responsible for maintaining the master data makes the change in the master record of the reference functional location and saves the entries.
The system automatically makes the same change for all affected functional locations that were assigned to this reference location and for the pieces of equipment that are installed at these locations. The system then issues a message informing the employee of the changes.
Regards
thyagarajan -
How to model hierarchical data?
I need a way to model hierarchical data. I have tried using an object so far, and it hasn't worked. Here is the code for the class I made: http://home.iprimus.com.au/deeps/StatsGroupClass.java. As you can see, there are 4 fields: 1 to store the name of the "group", 2 integer data fields, and 1 Vector field to store all descendants. Unfortunately, this this not seem to be working as the Vector get(int index) method returns an Object. This is the error I get:
Test.java:23: cannot resolve symbol
symbol : method getGroupName ()
location: class java.lang.Object
echo("Primary Structure with index 0: " + data.get(0).getGroupName());
^
1 error
I figure I can't use the approach I have been using because of this.
Can anyone help me out?
You need to cast the return value from get(0):
((YourFunkyClass)data.get(0)).getGroupName();
Be aware that you're opening yourself up to the possibility of a runtime ClassCastException. You could consider using generics if you can guarantee that the data Vector will contain only instances of YourFunkyClass.
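As a quick illustration of the generics suggestion (a sketch; StatsGroup stands in for the poster's class, and getGroupName() is assumed from the error message):
import java.util.Vector;

class StatsGroup {
    private String groupName;
    StatsGroup(String groupName) { this.groupName = groupName; }
    String getGroupName() { return groupName; }
}

class GenericsDemo {
    public static void main(String[] args) {
        // with a parameterized Vector, get(0) already returns StatsGroup, so no cast is needed
        Vector<StatsGroup> data = new Vector<StatsGroup>();
        data.add(new StatsGroup("Primary Structure"));
        System.out.println("Primary Structure with index 0: " + data.get(0).getGroupName());
    }
}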
Hope this helps -
AdvancedDataGrid - create Array (cfquery) with children for hierarchical data set
I'm trying to create an AdvancedDataGrid with a hierarchical
data set as shown below. The problem that I am having is how to
call the data from a ColdFusion remote call and not an
ArrayCollection inside of the Flex app (as below). I'm guessing
that the problem is with the CFC that I've created which builds an
array with children. I assume that the structure of the children is
the issue. Any thoughts?
Flex App without Remoting:
http://livedocs.adobe.com/labs/flex3/html/help.html?content=advdatagrid_10.html
<?xml version="1.0"?>
<!-- dpcontrols/adg/GroupADGChartRenderer.mxml -->
<mx:Application xmlns:mx="
http://www.adobe.com/2006/mxml">
<mx:Script>
<![CDATA[
import mx.collections.ArrayCollection;
[Bindable]
private var dpHierarchy:ArrayCollection= new
ArrayCollection([
{name:"Barbara Jennings", region: "Arizona", total:70,
children:[
{detail:[{amount:5}]}]},
{name:"Dana Binn", region: "Arizona", total:130, children:[
{detail:[{amount:15}]}]},
{name:"Joe Smith", region: "California", total:229,
children:[
{detail:[{amount:26}]}]},
{name:"Alice Treu", region: "California", total:230,
children:[
{detail:[{amount:159}]}]}
]);
]]>
</mx:Script>
<mx:AdvancedDataGrid id="myADG"
width="100%" height="100%"
variableRowHeight="true">
<mx:dataProvider>
<mx:HierarchicalData source="{dpHierarchy}"/>
</mx:dataProvider>
<mx:columns>
<mx:AdvancedDataGridColumn dataField="name"
headerText="Name"/>
<mx:AdvancedDataGridColumn dataField="total"
headerText="Total"/>
</mx:columns>
<mx:rendererProviders>
<mx:AdvancedDataGridRendererProvider
dataField="detail"
renderer="myComponents.ChartRenderer"
columnIndex="0"
columnSpan="0"/>
</mx:rendererProviders>
</mx:AdvancedDataGrid>
</mx:Application>
CFC - where I am trying to create an Array to send back to
the Flex App
<cfset aPackages = ArrayNew(1)>
<cfset aDetails = ArrayNew(1)>
<cfloop query="getPackages">
<cfset i = getPackages.CurrentRow>
<cfset aPackages
= StructNew()>
<cfset aPackages['name'] = name >
<cfset aPackages
['region'] = region >
<cfset aPackages['total'] = total >
<cfset aDetails
= StructNew()>
<cfset aDetails['amount'] = amount >
<cfset aPackages
['children'] = aDetails >
</cfloop>
<cfreturn aPackages>
I had similar problems attempting to create an Array of
Arrays in a CFC, so I created two differents scripts - one in CF
and one in Flex - to build Hierarchical Data from a query result.
The script in CF builds an Hierarchical XML document which is then
easily accepted as HIerarchical Data in Flex. The script in Flex
loops over the query Object that is returned as an Array
Collection. It took me so long to create the XML script, and I now
regret it, since it is not easy to maintain and keep dynamic.
However, it only took me a short while to build this ActionScript
logic, which I quite like now (though it is not [
yet ] dynamic, and currently only handles two levels of
Hierarchy):
(this is the main part of my WebService result handler)....
// Create a new Array Collection to store the Hierarchical
Data from the WebService Result
var categories:ArrayCollection = new ArrayCollection();
// Create an Object variable to store the parent-level
objects
var category:Object;
// Create an Object variable to store the child-level
objects
var subCategory:Object;
// Loop through each Object in the WebService Result
for each (var object:Object in results)
// Create a new Array Collection as a copy of the Array
Collection of Hierarchical Data
var thisCategory:ArrayCollection = new
ArrayCollection(categories.toArray());
// Create a new instance of the Filter Function Class
var filterFunction:FilterFunction = new FilterFunction();
// Create Filter on the Array Collection to return only
those records with the specified Category Name
thisCategory.filterFunction =
filterFunction.NameValueFilter("NAMETXT", object["CATNAMETXT"]);
// Refresh the Array Collection to apply the Filter
thisCategory.refresh();
// If the Array Collection has records, the Category Name
exists, so use the one Object in the Collection to add Children to
if (thisCategory.length)
category = thisCategory.getItemAt(0);
// If the Array Collection has no records, the Category Name
does not exist, so create a new Object
else
// Create a new parent-level Object
category = new Object();
// Create and set the Name property of the parent-level
Object
category["NAMETXT"] = object["CATNAMETXT"];
// Create a Children property as a new Array
category["children"] = new Array();
// Add the parent-level Object to the Array Collection
categories.addItem(category);
// Create a new child-level Object as a copy of the Object
in the WebService Result
subCategory = object;
// Create and set the Name property of the child-level
Object
subCategory["NAMETXT"] = object["SUBCATNAMETXT"];
// Add the child-level Object to the Array of Children on
the parent-level Object
category["children"].push(subCategory);
// Convert the Array Collection to a Hierarchical Data
Object and use it as the Data Provider for the Advanced Data Grid
advancedDataGrid.dataProvider = new
HierarchicalData(categories); -
Simple Transformation to deserialize an XML file into ABAP data structures?
I'm attempting to write my first simple transformation to deserialize
an XML file into ABAP data structures and I have a few questions.
My simple transformation contains code like the following
<tt:transform xmlns:tt="http://www.sap.com/transformation-templates"
xmlns:pp="http://www.sap.com/abapxml/types/defined" >
<tt:type name="REPORT" line-type="?">
<tt:node name="COMPANY_ID" type="C" length="10" />
<tt:node name="JOB_ID" type="C" length="20" />
<tt:node name="TYPE_CSV" type="C" length="1" />
<tt:node name="TYPE_XLS" type="C" length="1" />
<tt:node name="TYPE_PDF" type="C" length="1" />
<tt:node name="IS_NEW" type="C" length="1" />
</tt:type>
<tt:root name="ROOT2" type="pp:REPORT" />
<QueryResponse>
<tt:loop ref="ROOT2" name="line">
<QueryResponseRow>
<CompanyID>
<tt:value ref="$line.COMPANY_ID" />
</CompanyID>
<JobID>
<tt:value ref="$line.JOB_ID" />
</JobID>
<ExportTypes>
<tt:loop>
<ExportType>
I don't know what to do here (see item 3, below)
</ExportType>
</tt:loop>
</ExportTypes>
<IsNew>
<tt:value ref="$line.IS_NEW"
map="val(' ') = xml('false'), val('X') = xml('true')" />
</IsNew>
</QueryResponseRow>
</tt:loop>
</QueryResponse>
</tt:loop>
1. In a DTD, an element can be designated as occurring zero or one
time, zero or more times, or one or more times. How do I write the
simple transformation to accommodate these possibilities?
2. In trying to accommodate the "zero or more times" case, I am trying
to use the <tt:loop> instruction. It occurs several layers deep in the
XML hierarchy, but at the top level of the ABAP table. The internal
table has a structure defined in the ABAP program, not in the data
dictionary. In the simple transformation, I used <tt:type> and
<tt:node> to define the structure of the internal table and then
tried to use <tt:loop ref="ROOT2" name="line"> around the subtree that
can occur zero or more times. But every variation I try seems to get
different errors. Can anyone supply a working example of this?
3. Among the fields in the internal table, I've defined three
one-character fields named TYPE_CSV, TYPE_XLS, and TYPE_PDF. In the
XML file, I expect zero to three elements of the form
<ExportType exporttype='csv' />
<ExportType exporttype='xls' />
<ExportType exporttype='pdf' />
I want to set field TYPE_CSV = 'X' if I find an ExportType element
with its exporttype attribute set to 'csv'. I want to set field
TYPE_XLS = 'X' if I find an ExportType element with its exporttype
attribute set to 'xls'. I want to set field TYPE_PDF = 'X' if I find
an ExportType element with its exporttype attribute set to 'pdf'. How
can I do that?
4. For an element that has a value like
<ErrorCode>123</ErrorCode>
in the simple transformation, the sequence
<ErrorCode> <tt:value ref="ROOT1.CODE" /> </ErrorCode>
seems to work just fine.
I have other situations where the XML reads
<IsNew value='true' />
I wanted to write
<IsNew>
<tt:value ref="$line.IS_NEW"
map="val(' ') = xml('false'), val('X') = xml('true')" />
</IsNew>
but I'm afraid that the <tt:value> fails to deal with the fact that in
the XML file the value is being passed as the value of an attribute
(named "value"), rather than the value of the element itself. How do
you handle this?
Try this code below:
data l_xml_table2 type table of xml_line with header line.
W_filename - This is a Path.
if w_filename(02) = '
open dataset w_filename for output in binary mode.
if sy-subrc = 0.
l_xml_table2[] = l_xml_table[].
loop at l_xml_table2.
transfer l_xml_table2 to w_filename.
endloop.
endif.
close dataset w_filename.
else.
call method cl_gui_frontend_services=>gui_download
exporting
bin_filesize = l_xml_size
filename = w_filename
filetype = 'BIN'
changing
data_tab = l_xml_table
exceptions
others = 24.
if sy-subrc <> 0.
message id sy-msgid type sy-msgty number sy-msgno
with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
endif. -
OC4J: marshalling does not recreate the same data structure onthe client
Hi guys,
I am trying to use OC4J as an EJB container and have come across the following problem, which looks like a bug.
I have a value object method that returns an instance of ArrayList with references to other value objects of the same class. The value objects have references to other value objects. When this structure is marshalled across the network, we expect it to be recreated as is but that does not happen and instead objects get duplicated.
Suppose we have 2 value objects: ValueObject1 and ValueObject2. ValueObject1 references ValueObject2 via its private field and the ValueObject2 references ValueObject1. Both value objects are returned by our method in an ArrayList structure. Here is how it will look like (number after @ represents an address in memory):
Object[0] = com.cramer.test.SomeVO@1
Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
Object[1] = com.cramer.test.SomeVO@2
Object[1].getValueObject[0] = com.cramer.test.SomeVO@1
We would expect to see the same (except exact addresses) after marshalling. Here is what we get instead:
Object[0] = com.cramer.test.SomeVO@1
Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
Object[1] = com.cramer.test.SomeVO@3
Object[1].getValueObject[0] = com.cramer.test.SomeVO@4
It can be seen that objects get unnecessarily duplicated - the instance of the ValueObject1 referenced by the ValueObject2 is not the same now as the instance that is referenced by the ArrayList instance.
This does not only break referential integrity, structure and consistency of the data but dramatically increases the amount of information sent across the network. The problem was discovered when we found that a relatively small but complicated structure that gets serialized into a 142kb file requires about 20Mb of network communication. All this extra info is duplicated object instances.
I have created a small test case to demonstrate the problem and let you reproduce it.
Here is RMITestBean.java:
package com.cramer.test;
import javax.ejb.EJBObject;
import java.util.*;
public interface RMITestBean extends EJBObject
public ArrayList getSomeData(int testSize) throws java.rmi.RemoteException;
public byte[] getSomeDataInBytes(int testSize) throws java.rmi.RemoteException;
Here is RMITestBeanBean.java:
package com.cramer.test;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;
import java.util.*;
public class RMITestBeanBean implements SessionBean
private SessionContext context;
SomeVO someVO;
public void ejbCreate()
someVO = new SomeVO(0);
public void ejbActivate()
public void ejbPassivate()
public void ejbRemove()
public void setSessionContext(SessionContext ctx)
this.context = ctx;
public byte[] getSomeDataInBytes(int testSize)
ArrayList someData = getSomeData(testSize);
try {
java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
objectOutputStream.writeObject(someData);
objectOutputStream.flush();
System.out.println(" serialised output size: "+byteOutputStream.size());
byte[] bytes = byteOutputStream.toByteArray();
objectOutputStream.close();
byteOutputStream.close();
return bytes;
} catch (Exception e) {
System.out.println("Serialisation failed: "+e.getMessage());
return null;
public ArrayList getSomeData(int testSize)
// Create array of objects
ArrayList someData = new ArrayList();
for (int i=0; i<testSize; i++)
someData.add(new SomeVO(i));
// Interlink all the objects
for (int i=0; i<someData.size()-1; i++)
for (int j=i+1; j<someData.size(); j++)
((SomeVO)someData.get(i)).addValueObject((SomeVO)someData.get(j));
((SomeVO)someData.get(j)).addValueObject((SomeVO)someData.get(i));
// print out the data structure
System.out.println("Data:");
for (int i = 0; i<someData.size(); i++)
SomeVO tmp = (SomeVO)someData.get(i);
System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
for (int j = 0; j<tmp.getValueObjectCount(); j++)
SomeVO tmp2 = tmp.getValueObject(j);
System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
// Check the serialised size of the structure
try {
java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
objectOutputStream.writeObject(someData);
objectOutputStream.flush();
System.out.println("Serialised output size: "+byteOutputStream.size());
objectOutputStream.close();
byteOutputStream.close();
} catch (Exception e) {
System.out.println("Serialisation failed: "+e.getMessage());
return someData;
Here is RMITestBeanHome:
package com.cramer.test;
import javax.ejb.EJBHome;
import java.rmi.RemoteException;
import javax.ejb.CreateException;
public interface RMITestBeanHome extends EJBHome
RMITestBean create() throws RemoteException, CreateException;
Here is ejb-jar.xml:
<?xml version = '1.0' encoding = 'windows-1252'?>
<!DOCTYPE ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans 2.0//EN" "http://java.sun.com/dtd/ejb-jar_2_0.dtd">
<ejb-jar>
<enterprise-beans>
<session>
<description>Session Bean ( Stateful )</description>
<display-name>RMITestBean</display-name>
<ejb-name>RMITestBean</ejb-name>
<home>com.cramer.test.RMITestBeanHome</home>
<remote>com.cramer.test.RMITestBean</remote>
<ejb-class>com.cramer.test.RMITestBeanBean</ejb-class>
<session-type>Stateful</session-type>
<transaction-type>Container</transaction-type>
</session>
</enterprise-beans>
</ejb-jar>
And finally the application that tests the bean:
package com.cramer.test;
import java.util.*;
import javax.rmi.*;
import javax.naming.*;
public class RMITestApplication
final static boolean HARDCODE_SERIALISATION = false;
final static int TEST_SIZE = 2;
public static void main(String[] args)
Hashtable props = new Hashtable();
props.put(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.rmi.RMIInitialContextFactory");
props.put(Context.PROVIDER_URL, "ormi://lil8m:23792/alexei");
props.put(Context.SECURITY_PRINCIPAL, "admin");
props.put(Context.SECURITY_CREDENTIALS, "admin");
try {
// Get the JNDI initial context
InitialContext ctx = new InitialContext(props);
NamingEnumeration list = ctx.list("comp/env/ejb");
// Get a reference to the Home Object which we use to create the EJB Object
Object objJNDI = ctx.lookup("comp/env/ejb/RMITestBean");
// Now cast it to an InventoryHome object
RMITestBeanHome testBeanHome = (RMITestBeanHome)PortableRemoteObject.narrow(objJNDI,RMITestBeanHome.class);
// Create the Inventory remote interface
RMITestBean testBean = testBeanHome.create();
ArrayList someData = null;
if (!HARDCODE_SERIALISATION)
// ############################### Alternative 1 ##############################
// ## This relies on marshalling serialisation ##
someData = testBean.getSomeData(TEST_SIZE);
// ############################ End of Alternative 1 ##########################
} else
// ############################### Alternative 2 ##############################
// ## This gets a serialised byte stream and de-serialises it ##
byte[] bytes = testBean.getSomeDataInBytes(TEST_SIZE);
try {
java.io.ByteArrayInputStream byteInputStream = new java.io.ByteArrayInputStream(bytes);
java.io.ObjectInputStream objectInputStream = new java.io.ObjectInputStream(byteInputStream);
someData = (ArrayList)objectInputStream.readObject();
objectInputStream.close();
byteInputStream.close();
} catch (Exception e) {
System.out.println("Serialisation failed: "+e.getMessage());
// ############################ End of Alternative 2 ##########################
// Print out the data structure
System.out.println("Data:");
for (int i = 0; i<someData.size(); i++)
SomeVO tmp = (SomeVO)someData.get(i);
System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
for (int j = 0; j<tmp.getValueObjectCount(); j++)
SomeVO tmp2 = tmp.getValueObject(j);
System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
// Print out the size of the serialised structure
try {
java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
objectOutputStream.writeObject(someData);
objectOutputStream.flush();
System.out.println("Serialised output size: "+byteOutputStream.size());
objectOutputStream.close();
byteOutputStream.close();
} catch (Exception e) {
System.out.println("Serialisation failed: "+e.getMessage());
catch(Exception ex){
ex.printStackTrace(System.out);
The parameters you might be interested in playing with are HARDCODE_SERIALISATION and TEST_SIZE defined at the beginning of RMITestApplication.java. The HARDCODE_SERIALISATION is a flag that specifies whether Java serialisation should be used to pass the data across or we should rely on OC4J marshalling. TEST_SIZE defines the size of the object graph and the ArrayList structure. The bigger this size is the more dramatic effect you get from data duplication.
The test case outputs the structure both on the server and on the client and prints out the size of the serialised structure. That gives us sufficient comparison, as both structure and its size should be the same on the client and on the server.
The test case also demonstrates that the problem is specific to OC4J. The standard Java serialisation does not suffer the same flaw. However using the standard serialisation the way I did in the test case code is generally unacceptable as it breaks the transparency benefit and complicates interfaces.
To run the test case:
1) Modify provider URL parameter value on line 15 of the RMITestApplication.java for your environment.
2) Deploy the bean to the server.
4) Run RMITestApplication on a client PC.
5) Compare the outputs on the server and on the client.
I hope someone can reproduce the problem and give their opinion, and possibly point to the solution if there is one at the moment.
Cheers,
Alexei
Hi,
Eugene, wrong end user recovery. Alexey is referring to client desktop end user recovery which is entirely different.
Alexy - As noted in the previous post:
http://social.technet.microsoft.com/Forums/en-US/bc67c597-4379-4a8d-a5e0-cd4b26c85d91/dpm-2012-still-requires-put-end-users-into-local-admin-groups-for-the-purpose-of-end-user-data?forum=dataprotectionmanager
Each recovery point has users permisions tied to it, so it's not possible to retroacively give the users permissions. Implement the below and going forward all users can restore their own files.
This is a hands off solution to allow all users that use a machine to be able to restore their own files.
1) Make these two cmd files and save them in c:\temp
2) Using windows scheduler – schedule addperms.cmd to run daily – any new users that log onto the machine will automatically be able to restore their own files.
<addperms.cmd>
Cmd.exe /v /c c:\temp\addreg.cmd
<addreg.cmd>
set users=
echo Windows Registry Editor Version 5.00>c:\temp\perms.reg
echo [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\ClientProtection]>>c:\temp\perms.reg
FOR /F "Tokens=*" %%n IN ('dir c:\users\*. /b') do set users=!users!%Userdomain%\\%%n,
echo "ClientOwners"=^"%users%%Userdomain%\\bogususer^">>c:\temp\perms.reg
REG IMPORT c:\temp\perms.reg
Del c:\temp\perms.reg
Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
This posting is provided "AS IS" with no warranties, and confers no rights. -
Error saving data structure CE11000 (please read log) message number KX 655
while activating the data structure in the operating concern of CO PA sap gives the following errors.
1.Error saving data structure CE11000 (please read log)
Message no. KX655
2.Error saving table CE01000
Message no. KX593
3.in Log Reference field CE31000-REC_WAERS for CE31000-VVQ10001 has incorrect type.
Please suggest.
Hey,
Below tables are related to application logs
BAL_AMODAL : Application Log: INDX table for amodal communication
BALC : Application Log: Log or message context
BALDAT : Application Log: Log data
BALHANDLE : Application Log: Lock object dummy table
BALHDR : Application log: log header
BALHDRP : Application log: log parameter
BAL_INDX : Application Log: INDX tables
BALM : Application log: log message
BALMP : Application log: message parameter
BALOBJ : Application log: objects
BALOBJT : Application log: object texts
BALSUB : Application log: sub-objects
BALSUBT : Application log: Sub-object texts
-Kiran
*Please mark useful answers -
Can I automate the creation of a cluster in LabView using the data structure created in an auto generated .CSV, C header, or XML file? I'm trying to take the data structure defined in one or more of those files listed and have LabView automatically create a cluster with identical structure and data types. (Ideally, I would like to do this with a C header file only.) Basically, I'm trying to avoid having to create the cluster by hand, as the number of cluster elements could be very large. I've looked into EasyXML and contacted the rep for the add-on. Unfortunately, this capability has not been created yet. Has anyone done something like this before? Thanks in advance for the help.
Message Edited by PhilipJoeP on 04-29-2009 04:54 PM
Solved!
Go to Solution.
smercurio_fc wrote:
Is this something you're trying to do at runtime? Clusters are fixed data structures so you can't change them programmatically. Or, are you just trying to create some typedef cluster controls so that you can use them for coding? What would your clusters basically look like? Perhaps another way of holding the information like an array of variants?
You can try LabVIEW scripting, though be aware that this is not supported by NI.
Wow! Thanks for the quick response! We would use this cluster as a fixed data structure. No need to change the structure during runtime. The cluster would be a cluster of clusters with multiple levels. There would be not pattern as to how deep these levels would go, or how many elements would be in each. Here is the application. I would like to be able to autocode a Simulink model file into a DLL. The model DLL would accept a Simulink bus object of a certain data structure (bus of buses), pick out which elements of the bus is needed for the model calculation, and then pass the bus object. I then will take the DLL file and use the DLL VI block to pass a cluster into the DLL block (with identical structure as the bus in Simulink). To save time, I would like to auto generate the C header file using Simulink to define the bus structure and then have LabView read that header file and create the cluster automatically. Right now I can do everything but the auto creation of the cluster. I can manually build the cluster to match the Simulink model bus structure and it runs fine. But this is only for an example model with a small structure. Need to make the cluster creation automated so it can handle large structures with minimal brute force. Thanks! -
What is the best data structure for loading an enterprise Power BI site?
Hi folks, I'd sure appreciate some help here!
I'm a kinda old-fashioned gal and a bit of a traditionalist, building enterprise data warehouses out of Analysis Service hypercubes with a whole raft of MDX for analytics. Those puppies would sit up and beg when you asked them to deliver up goodies
to SSRS or PowerView.
But Power BI is a whole new game for me.
Should I be exposing each dimension and fact table in the relational data warehouse as a single Odata feed?
Should I be running Data Management Gateway and exposing each table in my RDW individually?
Should I be flattening my stars and snowflakes and creating a very wide First Normal Form dataset with everything relating to each fact?
I guess my real question, folks, is what's the optimum way of exposing data to the Power BI cloud?
And my subsidiary question is this: am I right in saying that all the data management, validation, cleansing, and regular ETTL processes are still required
before the data is suitable to expose to Power BI?
Or, to put it another way, is it not the case that you need to have a clean and properly structured data warehouse
before the data is ready to be massaged and presented by Power BI?
I'd sure value your thoughts and opinions,
Cheers, Donna
Donna Kelly
Dear All,
My original question was:
what's the optimum way of exposing data to the Power BI cloud?
Having spent the last month faffing about with Power BI – and reading about many people’s experiences using it – I think I can offer a few preliminary conclusions.
Before I do that, though, let me summarise a few points:
Melissa said “My initial thoughts: I would expose each dim & fact as a separate OData feed” and went on to say “one of the hardest things . . . is
the data modeling piece . . . I think we should try to expose the data in a way that'll help usability . . . which wouldn't be a wide, flat table ”.
Greg said “data modeling is not a good thing to expose end users to . . . we've had better luck with is building out the data model, and teaching the users
how to combine pre-built elements”
I had commented “. . . end users and data modelling don't mix . . . self-service so
far has been mostly a bust”.
Here at Redwing, we give out a short White Paper on Business Intelligence Reporting. It goes to clients and anyone else who wants one. The heart
of the Paper is the Reporting Pyramid, which states: Business intelligence is all about the creation and delivery of actionable intelligence to the right audience at the right time
For most of the audience, that means Corporate BI: pre-built reports delivered on a schedule.
For most of the remaining audience, that means parameterised, drillable, and sliceable reporting available via the web, running the gamut from the dashboard to the details, available on
demand.
For the relatively few business analysts, that means the ability for business users to create their own semi-customised visual reports when required, to serve
their audiences.
For the very few high-power users, that means the ability to interrogate the data warehouse directly, extract the required data, and construct data mining models, spreadsheets and other
intricate analyses as needed.
On the subject of self-service, the Redwing view says: Although many vendors want to sell self-service reporting tools to the enterprise, the facts of the matter are these:
- 80%+ of all enterprise reporting requirement is satisfied by corporate BI . . . if it’s done right.
- Very few staff members have the time, skills, or inclination to learn and employ self-service business intelligence in the course of their activities.
I cannot just expose raw data and tell everyone to get on with it. That way lies madness!
I think that clean and well-structured data is a prerequisite for delivering business intelligence.
Assuming that data is properly integrated, historically accurate and non-volatile as well, then I've just described
a data warehouse, which is the physical expression of the dimensional model.
Therefore, exposing the presentation layer of the data warehouse is – in my opinion – the appropriate interface for self-service business intelligence.
Of course, we can choose to expose perspectives as well, which is functionally identical to building and exposing subject data marts.
That way, all calculations, KPIs, definitions, and even field names, and all consistent because they all come from the single source of the truth, and not from spreadmart hell.
So my conclusion is that exposing the presentation layer of the properly modelled data warehouse is – in general - the way to expose data for self-service.
That’s fine for the general case, but what about Power BI? Well, it’s important to distinguish between new capabilities in Excel, and the ones in Office 365.
I think that to all intents and purposes, we’re talking about exposing data through the Data Management Gateway and reading it via Power Query.
The question boils down to what data structures should go down that pipe.
According to
Create a Data Source and Enable OData Feed in Power BI Admin Center, the possibilities are tables and views. I guess I could have repeating data in there, so it could be a flattened structure of the kind Melissa doesn’t like (and neither do I).
I could expose all the dims and all the facts . . . but that would mean essentially re-building the DW in the PowerPivot DM, and that would be just plain stoopid. I mean, not a toy system, but a real one with scores of facts and maybe hundreds of dimensions?
Fact is, I cannot for the life of me see what advantages DMG/PQ
has over just telling corporate users to go directly to the Cube Perspective they want, that has already all the right calcs, KPIs, security, analytics, field names . . . and most importantly, is already modelled correctly!
If I’m a real Power User, then I can use PQ on my desktop to pull mashup data from the world, along with all my on-prem data through my exposed Cube presentation layer, and PowerPivot the
heck out of that to produce all the reporting I’d ever want. It'd be a zillion times faster reading the data directly from the Cube instead of via the DMG, as well (I think Power BI performance sucks, actually).
Of course, your enterprise might not
have a DW, just a heterogeneous mass of dirty unstructured data. If that’s the case,
choosing Power BI data structures is the least of your problems! :-)
Cheers, Donna
Donna Kelly