Slightly OT: Master File Table Structure....
Dear all,
I'm trying to find out the exact layout of an MFT record.
I understand the record structure:
| Standard information | Name | Data | Empty |
However, I would like to know what every single byte in the record does. Any links would be much appreciated?
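For what it's worth, the fixed header at the start of each (usually 1 KiB) FILE record is described byte-by-byte in the public NTFS documentation (e.g. the Linux-NTFS project docs). A minimal Java sketch of the first fields, with offsets taken from those public docs (verify them against an authoritative reference before relying on them):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

// Decode the fixed header of one MFT FILE record from its raw bytes.
// Offsets per publicly documented NTFS layouts (not official Microsoft docs).
public class MftRecordHeader {
    public final String signature;      // "FILE" for a valid record, "BAAD" if damaged
    public final int firstAttrOffset;   // 0x14: where the attribute list starts
    public final int flags;             // 0x16: bit 0 = in use, bit 1 = directory
    public final long usedSize;         // 0x18: bytes actually used in this record
    public final long allocatedSize;    // 0x1C: usually 1024

    public MftRecordHeader(byte[] record) {
        ByteBuffer b = ByteBuffer.wrap(record).order(ByteOrder.LITTLE_ENDIAN);
        byte[] sig = new byte[4];
        b.get(sig);                                   // 0x00: magic number
        signature = new String(sig, StandardCharsets.US_ASCII);
        b.getShort();                                 // 0x04: update sequence array offset
        b.getShort();                                 // 0x06: update sequence array count
        b.getLong();                                  // 0x08: $LogFile sequence number (LSN)
        b.getShort();                                 // 0x10: sequence number
        b.getShort();                                 // 0x12: hard-link count
        firstAttrOffset = b.getShort() & 0xFFFF;      // 0x14
        flags = b.getShort() & 0xFFFF;                // 0x16
        usedSize = b.getInt() & 0xFFFFFFFFL;          // 0x18
        allocatedSize = b.getInt() & 0xFFFFFFFFL;     // 0x1C
    }
}
```

After the fixed header, the record is a sequence of attributes ($STANDARD_INFORMATION, $FILE_NAME, $DATA, ...), each with its own header, which is where the "| Standard information | Name | Data | Empty |" layout above comes from.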
Cheers,
Ben
OT ... Linux ... this is a Java programming forum, so
you had better find a more relevant forum on the net.
See
http://groups.google.com/
MFT (Master File Table) is part of NTFS, which is the file system found in the Windows NT family. If you are going to criticise my decision to post, please research your argument beforehand!
It may not be clear to you that in order to develop applications it is necessary first to understand the area you are developing for! It is impossible to develop a Java application that relates to the MFT if you do not first understand its structure byte by byte!
You will also note the subject heading "Slightly OT": this is because it is not purely a Java problem! It is a research problem, which leads to a Java problem.
Regards,
Ben
Similar Messages
-
Corrupted Master File Table on External Hard-Drive(NTFS)
You could try some data recovery software; it should be able to copy the data off for you (as long as there are no physical problems with the HD). Recuva is free; R-Studio is also very good. You will need enough space available on another drive to restore the data to.
Best of luck with it! :)
First off, I want to apologize if I sound inexperienced; this is my first time posting on Spiceworks and I have only been in IT for a year as an intern.
So here is my problem: I have an external hard drive that I use as a secondary drive for storage. The other day I went to get something off it when I noticed that I could not access the drive through File Explorer. It came up with an error saying "Data error (cyclic redundancy check)."
I proceeded to google the problem, since I had not seen the error before, and found many forums telling me to run a check disk on the drive with /f to fix the errors on the disk. I ran chkdsk and got the following error: "Corrupt master file table. Windows will attempt to recover master file table from disk. Windows cannot recover master file table. CHKDSK aborted."
I then booted my machine into a...
This topic first appeared in the Spiceworks Community -
Accessing NTFS Master File Table
I am trying to read the NTFS Master File Table. Does anybody know how to access the NTFS Master File Table using Java?
Thank you!
bschormann wrote:
Thanks anyhow. I am using Java on Windows platform and have searched everywhere.
Yes, you would be using Windows - NTFS is a Windows fs.
You should find the information to do what you want at www.ntfs.com. Although, I doubt that Java can access the data. Some searching (yes, I see you already searched everywhere) may turn up some prewritten Windows code. -
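A sketch of how Java might get at the raw volume on Windows. This rests on two assumptions worth stating loudly: that RandomAccessFile passes the "\\.\C:" device path through to the underlying Win32 CreateFile call (so the JVM must run with Administrator rights), and that the boot-sector offsets below, taken from public NTFS documentation, are correct for your volume:

```java
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Locate the $MFT by reading the NTFS boot sector of a raw volume.
public class MftLocator {

    // Win32 device path for a drive letter, e.g. 'C' -> \\.\C:
    static String devicePath(char driveLetter) {
        return "\\\\.\\" + driveLetter + ":";
    }

    // Byte offset of the $MFT, computed from boot-sector fields
    // (offsets per public NTFS documentation, not official Microsoft docs):
    //   0x0B  u16  bytes per sector
    //   0x0D  u8   sectors per cluster
    //   0x30  u64  logical cluster number of the $MFT
    static long mftByteOffset(byte[] bootSector) {
        ByteBuffer b = ByteBuffer.wrap(bootSector).order(ByteOrder.LITTLE_ENDIAN);
        int bytesPerSector = b.getShort(0x0B) & 0xFFFF;
        int sectorsPerCluster = b.get(0x0D) & 0xFF;
        long mftLcn = b.getLong(0x30);
        return mftLcn * sectorsPerCluster * bytesPerSector;
    }

    public static void main(String[] args) throws Exception {
        // Needs Administrator rights; raw reads must be sector-aligned.
        try (RandomAccessFile vol = new RandomAccessFile(devicePath('C'), "r")) {
            byte[] boot = new byte[512];
            vol.readFully(boot);                 // sector 0 = NTFS boot sector
            vol.seek(mftByteOffset(boot));       // the first record here is $MFT's own
        }
    }
}
```

Without admin rights the open fails with access denied, so a pure-Java approach is possible but constrained; the alternative is JNI/JNA against the Win32 API.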
Insert multiple text files to multiple tables that have different table structures
Hi All,
I have a small problem. I have lots of text files in a folder location, say, for example, Company.txt and Code.txt. I need to insert all these files into tables matching the file structure: the Company file into the Company table and the Code file into the Code table. The catch
is that all these table structures differ from one another.
How can I do this using SSIS? I guess a Foreach Loop container with a Data Flow Task would be a start.
Can somebody give me a step-by-step example of how I can achieve this.
Thanks
LM
It is very complicated to accomplish the above requirement using the standard Data Flow Task; you essentially have to set up a separate task for each table layout. If you can use third-party solutions, check the commercial CozyRoc
Data Flow Task Plus. It is an extension of the standard Data Flow Task, with the ability to do dynamic data flows at runtime. You can load all your tables and layouts with only one Data Flow Task. The other benefit is that the solution doesn't require programming.
SSIS Tasks Components Scripts Services | http://www.cozyroc.com/ -
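Outside SSIS, the same per-file routing can be sketched in plain code: derive the target table from the file name and generate the INSERT from each file's header row, so differing layouts need no per-table task. This is an illustrative sketch (not the CozyRoc component); the JDBC execution step that would bind and batch the data rows is omitted:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

// Route each delimited text file to the table named after it
// (Company.txt -> Company) and build a parameterised INSERT from its header.
public class FileToTableLoader {

    // Strip the extension: "Company.txt" -> "Company"
    static String tableName(Path file) {
        String name = file.getFileName().toString();
        int dot = name.lastIndexOf('.');
        return dot < 0 ? name : name.substring(0, dot);
    }

    // Build e.g. "INSERT INTO Code (Id,Code,Desc) VALUES (?,?,?)" from the header line.
    static String insertSql(String table, String headerLine, String delimiter) {
        String[] cols = headerLine.split(delimiter);
        String placeholders = "?" + ",?".repeat(cols.length - 1);
        return "INSERT INTO " + table + " (" + String.join(",", cols)
                + ") VALUES (" + placeholders + ")";
    }

    public static void main(String[] args) throws IOException {
        // For each text file in the folder, derive its statement; a real loader
        // would then execute it via JDBC batches, one prepared statement per file.
        try (DirectoryStream<Path> files = Files.newDirectoryStream(Paths.get(args[0]), "*.txt")) {
            for (Path f : files) {
                List<String> lines = Files.readAllLines(f);
                System.out.println(insertSql(tableName(f), lines.get(0), ","));
            }
        }
    }
}
```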
RFEBKA00 multiple file postings - modifying TVARVC table structure
All
I have a requirement to write a bespoke program that will pick up multiple MT940 files from an AL11 directory and submit them to the report RFEBKA00.
I am doing this by putting the file path and file name into a TVARVC variable and using this in the RFEBKA00 variant.
However, TVARVC-LOW can only handle 45 characters, whereas my file path including the name is more like 70-80 characters long, so the TVARVC variable is being truncated to 45 characters.
I do not really want to modify the receiving program (RFEBKA00) to handle multiple TVARVC entries and concatenate the strings together. The only other option I have thought about is modifying the TVARVC structure.
My question is:
What would be the implication of modifying the TVARVC table structure to increase the field lengths of LOW/HIGH to, say, 100 characters to handle the file paths needed in this case?
Thanks in advance
David
You should not modify standard SAP DDIC objects.
Also, the TVARVC table is used in many other reports and programs, and those may stop working after your changes.
Your first approach (multiple TVARVC entries concatenated in the program) is quite safe, though; alternatively you may create a new custom table. -
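For reference, the multiple-entries workaround amounts to nothing more than chunking the path into 45-character pieces and re-joining them in the receiving program. A sketch of just that chunking logic (written in Java for illustration; the real program would of course be ABAP, and the names here are made up):

```java
import java.util.ArrayList;
import java.util.List;

// Split a long file path into TVARVC-LOW-sized pieces and re-join them.
public class TvarvcChunks {
    static final int LOW_LEN = 45;  // width of the standard TVARVC-LOW field

    // One list entry per sequential TVARVC variable (e.g. ZPATH_01, ZPATH_02, ...)
    static List<String> split(String path) {
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < path.length(); i += LOW_LEN) {
            chunks.add(path.substring(i, Math.min(i + LOW_LEN, path.length())));
        }
        return chunks;
    }

    // The receiving side simply concatenates the entries back in order.
    static String join(List<String> chunks) {
        return String.join("", chunks);
    }
}
```

A 70-80 character path therefore needs only two entries, and the standard DDIC structure stays untouched.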
Creating a table structure using xsd file or excel?
Hi,
How do I create a table structure from the XSD file or Excel file generated from Access 2000?
Do you have any ideas, or do you know any useful tools for this?
Yes. This is possible with ADF Faces RC, which provides an af:tree component. Have a look at the Fusion Order Demo, which uses the tree to implement a similar use case to the one you describe.
http://www.oracle.com/technology/products/jdev/samples/fod/index.html
Regards,
RiC -
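One way to approach the XSD case directly is to walk the xs:element declarations and emit a CREATE TABLE statement. A rough Java sketch; the XSD-to-Oracle type mapping below is an assumption, and a real Access 2000 export will likely need a richer mapping and handling of nested complex types:

```java
import java.io.StringReader;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// Generate a CREATE TABLE statement from the <xs:element name=... type=...>
// declarations in an XSD document.
public class XsdToDdl {
    // Illustrative type mapping (an assumption, not a standard).
    static final Map<String, String> TYPES = new LinkedHashMap<>();
    static {
        TYPES.put("xs:string", "VARCHAR2(255)");
        TYPES.put("xs:int", "NUMBER(10)");
        TYPES.put("xs:integer", "NUMBER");
        TYPES.put("xs:decimal", "NUMBER");
        TYPES.put("xs:dateTime", "DATE");
        TYPES.put("xs:date", "DATE");
    }

    static String createTable(String table, String xsd) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(xsd)));
        // Default (non-namespace-aware) parsing, so the literal tag is "xs:element".
        NodeList elems = doc.getElementsByTagName("xs:element");
        StringBuilder ddl = new StringBuilder("CREATE TABLE " + table + " (");
        boolean first = true;
        for (int i = 0; i < elems.getLength(); i++) {
            Element e = (Element) elems.item(i);
            String type = TYPES.get(e.getAttribute("type"));
            if (type == null) continue;  // skip container elements without a simple type
            ddl.append(first ? "" : ", ").append(e.getAttribute("name")).append(' ').append(type);
            first = false;
        }
        return ddl.append(")").toString();
    }
}
```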
Table Structure for master entries
I have nearly 30 master table items. For easy management I created the tables as follows.
Table for the master table items
MASTER_TYPE
TYPE_ID NUMBER PK
TYPE_DESC Varchar(140)
Table for the master entries
MASTER_TABLE
ID NUMBER PK
TYPE_ID NUMBER FK to MASTER_TYPE
DESC Varchar(240)
Are there any technical issues with creating a master table structure like this?
Please advise me.
Hi Fabienne,
See the below blog, this will explain to find the table and field name for a Screen field.
/people/community.user/blog/2006/12/29/useful-tips-to-find-out-the-table-for-screen-field
Please close the thread if this solves your problem.
Regards
Sudheer -
Table structure and constraints in HTML table
This script creates a html file (Structure.html) that contains structure of a specific table.
When the following script is executed in sql * plus, it asks for the table name for which
structure information is needed. after entering the table name, it writes the table structure
into structure.html file.
SET LINESIZE 150
SET PAGESIZE 150
SET FEEDBACK OFF
SET VERIFY OFF
COLUMN "COLUMN NAME" FORMAT A50
COLUMN "DATA TYPE" FORMAT A15
COLUMN "IS NULL" FORMAT A15
COLUMN CONSTRAINTS FORMAT A15
PROMPT Enter table name:
ACCEPT TABNAME
SET MARK HTML ON
SPOOL STRUCTURE.html
PROMPT &TABNAME
-- Query ---
SELECT TRIM(A.COLUMN_NAME) AS "COLUMN NAME",
TRIM(DATA_TYPE||'('||DECODE(A.DATA_LENGTH,22,A.DATA_PRECISION||','||A.DATA_SCALE,
A.DATA_LENGTH) || ')') AS "DATA TYPE",
TRIM(DECODE(A.NULLABLE,'Y',' ','NOT NULL')) AS "IS NULL",
TRIM(DECODE(C.CONSTRAINT_TYPE,'P','PRIMARY KEY','R','FOREIGN KEY('||D.TABLE_NAME||')','U','UNIQUE', 'C','CHECK')) AS CONSTRAINTS,
TRIM(C.CONSTRAINT_NAME) AS "CONSTRAINT NAME",
C.SEARCH_CONDITION AS "CHECK CONDITION",
A.DATA_DEFAULT AS "DEFAULT VALUE"
FROM USER_TAB_COLS A,
USER_CONS_COLUMNS B,
USER_CONSTRAINTS C,
USER_CONS_COLUMNS D
WHERE
A.TABLE_NAME = '&TABNAME' AND
A.TABLE_NAME = B.TABLE_NAME(+) AND
A.COLUMN_NAME = B.COLUMN_NAME(+) AND
B.CONSTRAINT_NAME = C.CONSTRAINT_NAME(+) AND
C.R_CONSTRAINT_NAME = D.CONSTRAINT_NAME(+);
SPOOL OFF
SET MARK HTML OFF
Hi,
For head count you can use the 0HR_PA_0 DataSource; the other employee details like start date and end date you can get from employee master data, and FTE can be calculated from the employee master data and the head count data.
Hope this helps...
Thanks, -
Export table structure in datapump
Hi,
Oracle Version : 10.2.0.1 and 11.2.0.1
Operating system:Linux
I need help with exporting only the table structure in 10g.
In 11g, when I tried to export the table structure for the tables starting with ST_LO_%, it worked fine; here is the output for that export statement.
[oracle@vtlsys2-209 dbdump]$ expdp CNGSTORES_TEST_DEC1610/CNGSTORES_TEST_DEC1610 directory=dbdump dumpfile=chala_feb2111.dmp logfile=chala_feb2111.log tables="ST_IL_%","ST_LO_%","SCM%","ARCH%" exclude=statistics,grants job_name=tablesfil parallel=4 version=10.2 content=metadata_only
Export: Release 11.2.0.1.0 - Production on Mon Feb 21 14:34:16 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
UDE-01017: operation generated ORACLE error 1017
ORA-01017: invalid username/password; logon denied
Username: cngstores_test_dec1610/cngstores_test_dec1610
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "CNGSTORES_TEST_DEC1610"."TABLESFIL": cngstores_test_dec1610/******** directory=dbdump dumpfile=chala_feb2111.dmp logfile=chala_feb2111.log tables=ST_IL_%,ST_LO_%,SCM%,ARCH% exclude=statistics,grants job_name=tablesfil parallel=4 version=10.2 content=metadata_only
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Master table "CNGSTORES_TEST_DEC1610"."TABLESFIL" successfully loaded/unloaded
Dump file set for CNGSTORES_TEST_DEC1610.TABLESFIL is:
/u05/dbdump/chala_feb2111.dmp
Job "CNGSTORES_TEST_DEC1610"."TABLESFIL" successfully completed at 14:34:37
But when I tried it in 10g it threw errors; below is the export script.
[oracle@VTL1253AD dbdump]$ expdp aa_test/aa_test directory=dbdump dumpfile=st_LO_IL_EMPTYTABLES.dmp logfile=st_LO_IL_EMPTYTABLES.log content=metadata_only job_name=aa_empty tables="ST_LO_%","ST_IL_%"
Export: Release 10.2.0.1.0 - Production on Monday, 21 February, 2011 14:03:18
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "AA_TEST"."AA_EMPTY": aa_test/******** directory=dbdump dumpfile=st_LO_IL_EMPTYTABLES.dmp logfile=st_LO_IL_EMPTYTABLES.log content=metadata_only job_name=aa_empty tables=ST_LO_%,ST_IL_%
ORA-39166: Object ST_IL_% was not found.
ORA-39166: Object ST_LO_% was not found.
ORA-31655: no data or metadata objects selected for job
Job "AA_TEST"."AA_EMPTY" completed with 3 error(s) at 14:03:24
Can anyone please help me export the table structures that start with ST_LO_% and ST_IL_%.
Please help me.
Thanks & Regards,
Poorna Prasad.S
Hi N Gasparotto,
Thanks for your quick reply.
When I export the ST_LO_% tables using the include parameter, it works fine.
[oracle@VTL1253AD ~]$ expdp aa_test/aa_test directory=dbdump dumpfile=aa_test_feb2111_st_loil.dmp logfile=aa_test_feb2111_st_LO.log parallel=4 job_name=aa_test1 content=metadata_only include=table:\"like \'ST_LO_%\'\"
Export: Release 10.2.0.1.0 - Production on Monday, 21 February, 2011 15:10:30
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "AA_TEST"."AA_TEST1": aa_test/******** directory=dbdump dumpfile=aa_test_feb2111_st_loil.dmp logfile=aa_test_feb2111_st_LO.log parallel=4 job_name=aa_test1 content=metadata_only include=table:"like 'ST_LO_%'"
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Master table "AA_TEST"."AA_TEST1" successfully loaded/unloaded
Dump file set for AA_TEST.AA_TEST1 is:
/u04/dbdump/aa_test_feb2111_st_loil.dmp
Job "AA_TEST"."AA_TEST1" successfully completed at 15:10:39
But when I tried to export the tables that start with ST_LO_% and ST_IL_% together, the export failed.
[oracle@VTL1253AD ~]$ expdp aa_test/aa_test directory=dbdump dumpfile=aa_test_feb2111_st_loil.dmp logfile=aa_test_feb2111_st_LO.log parallel=4 job_name=aa_test1 content=metadata_only include=table:\"like \'ST_LO_%\'\",table:\"like \'ST_IL_%\'\"
Export: Release 10.2.0.1.0 - Production on Monday, 21 February, 2011 15:07:35
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "AA_TEST"."AA_TEST1": aa_test/******** directory=dbdump dumpfile=aa_test_feb2111_st_loil.dmp logfile=aa_test_feb2111_st_LO.log parallel=4 job_name=aa_test1 content=metadata_only include=table:"like 'ST_LO_%'",table:"like 'ST_IL_%'"
ORA-39168: Object path TABLE was not found.
ORA-31655: no data or metadata objects selected for job
Job "AA_TEST"."AA_TEST1" completed with 2 error(s) at 15:07:41
[oracle@VTL1253AD ~]$ expdp aa_test/aa_test directory=dbdump dumpfile=aa_test_feb2111_st_loil.dmp logfile=aa_test_feb2111_st_LO.log parallel=4 job_name=aa_test1 content=metadata_only include=table:\"like \'ST_LO_%\',\'ST_IL_%\'\"
Export: Release 10.2.0.1.0 - Production on Monday, 21 February, 2011 15:08:56
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39071: Value for INCLUDE is badly formed.
ORA-00933: SQL command not properly ended
Can you please help me find the mistake in my export script when trying to export ST_LO_% and ST_IL_%.
Regards,
Poorna Prasad.S -
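A possible explanation (an assumption based on how Data Pump behaves in 10.2): multiple INCLUDE filters on the same object type are combined with AND, so no table can match both LIKE patterns at once, hence ORA-39168. A single filter whose name expression covers both prefixes should work, and putting it in a parameter file avoids the shell-escaping trouble seen in the last attempt. A sketch, reusing the poster's own parameter values:

```shell
# Write the Data Pump parameters to a parfile so no shell escaping is needed.
cat > structs.par <<'EOF'
directory=dbdump
dumpfile=st_structs.dmp
logfile=st_structs.log
content=metadata_only
job_name=aa_structs
include=TABLE:"IN (SELECT table_name FROM user_tables WHERE table_name LIKE 'ST_LO_%' OR table_name LIKE 'ST_IL_%')"
EOF

# One INCLUDE filter now matches both prefixes.
expdp aa_test/aa_test parfile=structs.par
```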
BI 7 : Command to export a table structure of SAP R/3 into a script/text ?
Hi All.
Greetings.
I am new to the SAP R/3 system and would appreciate some help.
We are trying to pull data from SAP R/3 through BusinessObjects Data Services into Oracle.
For now, we create a target Oracle table by looking at the table structure of SAP R/3 in SE11.
In BODS, we then do the query transformation and use the Oracle target table we created manually.
This works absolutely fine.
We would like to know the command by which we could export the table structure of any existing table
in SAP R/3 into a script or text file,
which we could use to create the same table structure in Oracle,
rather than manually typing some 200 field names for each table.
Can anyone advise on this.
Thanks
Indu
Hello,
The problem is caused by the spaces in your directories:
C:\SAP Dumps\Core Release SR1 Export_CD1_51019634/DB/ADA/DBSIZE.XML
Replace the spaces with underscores and restart the installation from scratch.
Cheers
Bert -
Help - Ext HDD Referenced Master Files Gets Connected to Wrong Drive
I just bought Aperture and upgraded to 1.5 and am a new user. Prior to the external/open library, I would not have bought Aperture because I need to access my images on both Macs and PCs.
I spent all day yesterday creating project structure, albums and importing images into Aperture. This morning when I launched Aperture, the referenced masters on external HD were somehow now connected to my Boot Camp WindowsXP partition. When this occurred, I did not have my external drive connected.
Naturally, I connected the external drive and relaunched Aperture, but all the images were referenced to my XP partition with the directory path that belongs to my external HD (i.e., the path is XP/Photo Lib/2006/... instead of what it should be, LACIE/Photo Lib/2006/...). I was able to reconnect by selecting one image and using the reconnect-all function. Easy enough, but troublesome.
Has anyone else experienced something like this? Is this a bug on 1.5? Am I doing something wrong - I could not find any setting that would cause this behavior.
I am using an MBP, so this is a bit of a big deal for me....
Thanks in advance.
Mac OS X (10.4.7), XP Pro
My comment was too generalized - sorry.
Actually, the version file I was referring to is a version that gets created when an external editor is used. As you know, Aperture can create either a TIFF or PSD file to be used by an external editor. Once it is saved in the external editor, the preview is updated and the version is stored in the Aperture Library. Unless I save the file using "Save As" into a different location manually, the edited file stays in the Aperture Library. I will have to export the file into the directory of choice.
If Aperture saved the external edited file into where the master file is located, then Photoshop'd or any other editor modified files can be accessed directly without having to go through the extra steps -
Sample Applescript: scraping values from numbers files into a master file
Hi, I have programming experience in c and other languages, but am new to applescript and so am learning a lot from this forum.
My goal is to make a timesheet system for my Dad (for a bday present) where every time he helps a client, he fills out a newly created numbers file - and after a week or so, he can run a script that scrapes certain values from each numbers file and places it into a master numbers file. Then saving and closing the file.
Vince, it sounds like you've written a script that does this feature of looping through all numbers files in a folder and putting select values from each numbers file into a master numbers file (after clearing the previous values of the master file).
Specifically, I'm looking for a sample script that opens up a numbers file, clears its table, then fills this table by scraping one value from a particular cell in every numbers file in a folder.
If anyone has a similar script they would be willing to post or email to me, for me to use as a foundation and to learn from, I would be very very very grateful. My email is forman.jq at gmail dot com.
I guess that this script may be a good starting point.
--[SCRIPT fromfolder_2spreadsheet1]
(*
The target spreadsheet must be open at front and must contain the sheet sheet_destination, which must contain the table table_destination.
Choose the folder supposed to store the source spreadsheets.
Yvan KOENIG (VALLAURIS, France)
2010/08/18
*)
--=====
(* Edit these eight properties to fit your needs *)
property destination : "destinationDoc.numbers"
property sheet_destination : "destination"
property table_destination : "insert here"
property premierelignedestination : 2
property colonne_destination : 2
property ledossierhabituel : "Macintosh HD Maxtor:Users:yvan_koenig:Desktop:dossier habituel:"
property ligne_source : 2
property colonne_source : 2
--=====
on run
my activateGUIscripting()
(* Select the folder storing the spreadsheets from which we will extract values *)
set dossier_source to choose folder with prompt "Choose folder storing the Numbers documents…" default location (ledossierhabituel as alias)
(* Build a list of disk items available in the selected folder *)
tell application "System Events"
set les_elements to every disk item of folder (dossier_source as text) --whose (get type identifier) is in
set les_tableurs to {}
(* Extract the list of the Numbers spreadsheets available in the selected folder *)
repeat with refsurelement in les_elements
if type identifier of refsurelement is in {"com.apple.iwork.numbers.numbers", "com.apple.iwork.numbers.sffnumbers"} then
copy path of refsurelement to end of les_tableurs
end if
end repeat
end tell -- System Events
if les_tableurs is {} then
(* No Numbers documents available, so we stop the process. *)
set rapport to "The folder “" & dossier_source & "” doesn’t contain Numbers documents !"
else
set rapport to {}
end if
(* Check that the target Numbers document is open at front
and that it embeds the defined sheet embedding the defined table. *)
tell application "Numbers"
activate
set existants to name of documents
if destination is not in existants then
copy "The document " & destination & " is not open !" to end of rapport
else
tell document destination
if sheet_destination is not in (name of sheets) then
copy "the sheet " & sheet_destination & " is unavailable in the document " & destination & " !" to end of rapport
else
tell sheet sheet_destination
if table_destination is not in (name of tables) then copy "The table " & table_destination & " is unavailable in the sheet " & sheet_destination & " of the document " & destination & " !" to end of rapport
end tell -- sheetSource
end if
end tell --document destination
end if
(* If the target document is not at front or if it doesn't match the defined requirements,
we quit the process. *)
if rapport is not {} then error my recolle(rapport, return)
(* Clean the target table, minus row 1, supposed to be storing column headers *)
tell document destination to tell sheet sheet_destination to tell table table_destination
set selection range to range ("A2 : " & name of last cell)
end tell --document destination
end tell -- Numbers
my selectMenu("Numbers", 4, 9) (* Suppress *)
set liste_valeurs to {}
tell application "Numbers"
repeat with un_tableur in les_tableurs
(* Open the spreadsheets and extract from each of them the wanted value *)
open un_tableur
tell document 1 to tell sheet 1 to tell table 1
set une_valeur to value of cell 2 of column 2
end tell
if une_valeur is 0.0 then
copy "empty" to end of liste_valeurs
else
copy une_valeur as text to end of liste_valeurs
end if
close document 1
end repeat
(* Now, it's time to insert the values in the target table *)
set ligne_destination to premierelignedestination
tell document destination to tell sheet sheet_destination to tell table table_destination
repeat with une_valeur in liste_valeurs
if not (exists row ligne_destination) then add row below last row
if une_valeur is not "empty" then
set value of cell ligne_destination of column colonne_destination to une_valeur
end if
set ligne_destination to ligne_destination + 1
end repeat
end tell -- document destination
save document destination
end tell -- Numbers
end run
--=====
on recolle(l, d)
local oTIDs, t
set oTIDs to AppleScript's text item delimiters
set AppleScript's text item delimiters to d
set t to l as text
set AppleScript's text item delimiters to oTIDs
return t
end recolle
--=====
on activateGUIscripting()
(* to be sure than GUI scripting will be active *)
tell application "System Events"
if not (UI elements enabled) then set (UI elements enabled) to true
end tell
end activateGUIscripting
--=====
(* Example: my selectMenu("Pages", 5, 12) *)
(* ==== Uses GUI scripting ==== *)
on selectMenu(theApp, mt, mi)
tell application theApp
activate
tell application "System Events" to tell process theApp to tell menu bar 1 to ¬
tell menu bar item mt to tell menu 1 to click menu item mi
end tell -- application theApp
end selectMenu
--=====
--[/SCRIPT]
I apologize, I'm too busy to write more explanations.
Yvan KOENIG (VALLAURIS, France) mercredi 18 août 2010 21:38:04 -
TIPS(18) : CREATING SCRIPTS TO RECREATE A TABLE STRUCTURE
Product: SQL*Plus
Date written: 1996-11-12
TIPS(18) : Creating Scripts to Recreate a Table Structure
=========================================================
The script creates scripts that can be used to recreate a table structure.
For example, this script can be used when a table has become fragmented or to
get a definition that can be run on another database.
CREATES SCRIPT TO RECREATE A TABLE-STRUCTURE
INCL. STORAGE, CONSTRAINTS, TRIGGERS ETC.
This script creates scripts to recreate a table structure.
Use the script to reorganise a table that has become fragmented,
to get a definition that can be run on another database/schema or
as a basis for altering the table structure (eg. drop a column!).
IMPORTANT: Running the script is safe as it only creates two new scripts and
does not do anything to your database! To get anything done you have to run the
scripts created.
The created scripts do the following:
1. save the content of the table
2. drop any foreign key constraints referencing the table
3. drop the table
4. creates the table with an Initial storage parameter that
will accommodate the entire content of the table. The Next
parameter is 25% of the initial.
The storage parameters are picked from the following list:
64K, 128K, 256K, 512K, multiples of 1M.
5. create table and column comments
6. fill the table with the original content
7. create all the indexes incl storage parameters as above.
8. add primary, unique key and check constraints.
9. add foreign key constraints for the table and for referencing
tables.
10.Create the table's triggers.
11.Compile any depending objects (cascading).
12.Grant table and column privileges.
13.Create synonyms.
This script must be run as the owner of the table.
If your table contains a LONG-column, use the COPY
command in SQL*Plus to store/restore the data.
USAGE
from SQL*Plus:
start reorgtb
This will create the scripts REORGS1.SQL and REORGS2.SQL
REORGS1.SQL contains code to save the current content of the table.
REORGS2.SQL contains code to rebuild the table structure.
undef tab;
set echo off
column a1 new_val stor
column b1 new_val nxt
select
decode(sign(1024-sum(bytes)/1024),-1,to_char((round(sum(bytes)/(1024*1024))+1))||'M', /* > 1M rounded up to nearest megabyte */
decode(sign(512-sum(bytes)/1024), -1,'1M',
decode(sign(256-sum(bytes)/1024), -1,'512K',
decode(sign(128-sum(bytes)/1024), -1,'256K',
decode(sign(64-sum(bytes)/1024) , -1,'128K',
'64K'))))) a1,
decode(sign(1024-sum(bytes)/4096),-1,to_char((round(sum(bytes)/(4096*1024))+1))||'M', /* next = 25% of initial */
decode(sign(512-sum(bytes)/4096), -1,'1M',
decode(sign(256-sum(bytes)/4096), -1,'512K',
decode(sign(128-sum(bytes)/4096), -1,'256K',
decode(sign(64-sum(bytes)/4096) , -1,'128K',
'64K'))))) b1
from user_extents
where segment_name=upper('&1');
set pages 0 feed off verify off lines 150
col c1 format a80
spool reorgs1.sql
PROMPT drop table bk_&1
prompt /
PROMPT create table bk_&1 storage (initial &stor) as select * from &1
prompt /
spool off
spool reorgs2.sql
PROMPT spool reorgs2
select 'alter table '||table_name||' drop constraint
'||constraint_name||';'
from user_constraints where r_constraint_name
in (select constraint_name from user_constraints where
table_name=upper('&1')
and constraint_type in ('P','U'));
PROMPT drop table &1
prompt /
prompt create table &1
select decode(column_id,1,'(',',')
||rpad(column_name,40)
||decode(data_type,'DATE' ,'DATE '
,'LONG' ,'LONG '
,'LONG RAW','LONG RAW '
,'RAW' ,'RAW '
,'CHAR' ,'CHAR '
,'VARCHAR' ,'VARCHAR '
,'VARCHAR2','VARCHAR2 '
,'NUMBER' ,'NUMBER '
,'unknown')
||rpad(
decode(data_type,'DATE' ,null
,'LONG' ,null
,'LONG RAW',null
,'RAW' ,decode(data_length,null,null
,'('||data_length||')')
,'CHAR' ,decode(data_length,null,null
,'('||data_length||')')
,'VARCHAR' ,decode(data_length,null,null
,'('||data_length||')')
,'VARCHAR2',decode(data_length,null,null
,'('||data_length||')')
,'NUMBER' ,decode(data_precision,null,' '
,'('||data_precision||
decode(data_scale,null,null
,','||data_scale)||')')
,'unknown'),8,' ')
||decode(nullable,'Y','NULL','NOT NULL') c1
from user_tab_columns
where table_name = upper('&1')
order by column_id;
prompt )
select 'pctfree '||t.pct_free c1
,'pctused '||t.pct_used c1
,'initrans '||t.ini_trans c1
,'maxtrans '||t.max_trans c1
,'tablespace '||s.tablespace_name c1
,'storage (initial '||'&stor' c1
,' next '||'&stor' c1
,' minextents '||t.min_extents c1
,' maxextents '||t.max_extents c1
,' pctincrease '||t.pct_increase||')' c1
from user_Segments s, user_tables t
where s.segment_name = upper('&1') and
t.table_name = upper('&1')
and s.segment_type = 'TABLE';
prompt /
select 'comment on table &1 is '''||comments||''';' c1 from
user_tab_comments
where table_name=upper('&1');
select 'comment on column &1..'||column_name||
' is '''||comments||''';' c1 from user_col_comments
where table_name=upper('&1');
prompt insert into &1 select * from bk_&1
prompt /
set serveroutput on
declare
cursor c1 is select index_name,decode(uniqueness,'UNIQUE','UNIQUE')
unq
from user_indexes where
table_name = upper('&1');
indname varchar2(50);
cursor c2 is select
decode(column_position,1,'(',',')||rpad(column_name,40) cl
from user_ind_columns where table_name = upper('&1') and
index_name = indname
order by column_position;
l1 varchar2(100);
l2 varchar2(100);
l3 varchar2(100);
l4 varchar2(100);
l5 varchar2(100);
l6 varchar2(100);
l7 varchar2(100);
l8 varchar2(100);
l9 varchar2(100);
begin
dbms_output.enable(100000);
for c in c1 loop
dbms_output.put_line('create '||c.unq||' index '||c.index_name||' on
&1');
indname := c.index_name;
for q in c2 loop
dbms_output.put_line(q.cl);
end loop;
dbms_output.put_line(')');
select 'pctfree '||i.pct_free ,
'initrans '||i.ini_trans ,
'maxtrans '||i.max_trans ,
'tablespace '||i.tablespace_name ,
'storage (initial '||
decode(sign(1024-sum(e.bytes)/1024),-1,
to_char((round(sum(e.bytes)/(1024*1024))+1))||'M',
decode(sign(512-sum(e.bytes)/1024), -1,'1M',
decode(sign(256-sum(e.bytes)/1024), -1,'512K',
decode(sign(128-sum(e.bytes)/1024), -1,'256K',
decode(sign(64-sum(e.bytes)/1024) , -1,'128K',
'64K'))))) ,
' next '||
decode(sign(1024-sum(e.bytes)/4096),-1,
to_char((round(sum(e.bytes)/(4096*1024))+1))||'M',
decode(sign(512-sum(e.bytes)/4096), -1,'1M',
decode(sign(256-sum(e.bytes)/4096), -1,'512K',
decode(sign(128-sum(e.bytes)/4096), -1,'256K',
decode(sign(64-sum(e.bytes)/4096) , -1,'128K',
'64K'))))) ,
' minextents '||s.min_extents ,
' maxextents '||s.max_extents ,
' pctincrease '||s.pct_increase||')'
into l1,l2,l3,l4,l5,l6,l7,l8,l9
from user_extents e,user_segments s, user_indexes i
where s.segment_name = c.index_name
and s.segment_type = 'INDEX'
and i.index_name = c.index_name
and e.segment_name=s.segment_name
group by s.min_extents,s.max_extents,s.pct_increase,
i.pct_free,i.ini_trans,i.max_trans,i.tablespace_name ;
dbms_output.put_line(l1);
dbms_output.put_line(l2);
dbms_output.put_line(l3);
dbms_output.put_line(l4);
dbms_output.put_line(l5);
dbms_output.put_line(l6);
dbms_output.put_line(l7);
dbms_output.put_line(l8);
dbms_output.put_line(l9);
dbms_output.put_line('/');
end loop;
end;
/
declare
cursor c1 is
select constraint_name, decode(constraint_type,'U',' UNIQUE',' PRIMARY
KEY') typ,
decode(status,'DISABLED','DISABLE',' ') status from user_constraints
where table_name = upper('&1')
and constraint_type in ('U','P');
cname varchar2(100);
cursor c2 is
select decode(position,1,'(',',')||rpad(column_name,40) coln
from user_cons_columns
where table_name = upper('&1')
and constraint_name = cname
order by position;
begin
for q1 in c1 loop
cname := q1.constraint_name;
dbms_output.put_line('alter table &1');
dbms_output.put_line('add constraint '||cname||q1.typ);
for q2 in c2 loop
dbms_output.put_line(q2.coln);
end loop;
dbms_output.put_line(')' ||q1.status);
dbms_output.put_line('/');
end loop;
end;
/
declare
cursor c1 is
select c.constraint_name,c.r_constraint_name cname2,
c.table_name table1, r.table_name table2,
decode(c.status,'DISABLED','DISABLE',' ') status,
decode(c.delete_rule,'CASCADE',' on delete cascade ',' ')
delete_rule
from user_constraints c,
user_constraints r
where c.constraint_type='R' and
c.r_constraint_name = r.constraint_name and
c.table_name = upper('&1')
union
select c.constraint_name,c.r_constraint_name cname2,
c.table_name table1, r.table_name table2,
decode(c.status,'DISABLED','DISABLE',' ') status,
decode(c.delete_rule,'CASCADE',' on delete cascade ',' ')
delete_rule
from user_constraints c,
user_constraints r
where c.constraint_type='R' and
c.r_constraint_name = r.constraint_name and
r.table_name = upper('&1');
cname varchar2(50);
cname2 varchar2(50);
cursor c2 is
select decode(position,1,'(',',')||rpad(column_name,40) colname
from user_cons_columns
where constraint_name = cname
order by position;
cursor c3 is
select decode(position,1,'(',',')||rpad(column_name,40) refcol
from user_cons_columns
where constraint_name = cname2
order by position;
begin
dbms_output.enable(100000);
for q1 in c1 loop
cname := q1.constraint_name;
cname2 := q1.cname2;
dbms_output.put_line('alter table '||q1.table1||' add constraint ');
dbms_output.put_line(cname||' foreign key');
for q2 in c2 loop
dbms_output.put_line(q2.colname);
end loop;
dbms_output.put_line(') references '||q1.table2);
for q3 in c3 loop
dbms_output.put_line(q3.refcol);
end loop;
dbms_output.put_line(') '||q1.delete_rule||q1.status);
dbms_output.put_line('/');
end loop;
end;
/
col c1 format a79 word_wrap
set long 32000
set arraysize 1
select 'create or replace trigger ' c1,
description c1,
'WHEN ('||when_clause||')' c1,
trigger_body ,
'/' c1
from user_triggers
where table_name = upper('&1') and when_clause is not null;
select 'create or replace trigger ' c1,
description c1,
trigger_body ,
'/' c1
from user_triggers
where table_name = upper('&1') and when_clause is null;
select 'alter trigger '||trigger_name||decode(status,'DISABLED',' DISABLE',' ENABLE')
from user_triggers where table_name = upper('&1');
set serveroutput on
declare
cursor c1 is
select 'alter table '||'&1'||decode(substr(constraint_name,1,4),'SYS_',' ',
' add constraint ') a1,
decode(substr(constraint_name,1,4),'SYS_',' ',constraint_name)||' check (' a2,
search_condition a3,
') '||decode(status,'DISABLED','DISABLE','') a4,
'/' a5
from user_constraints
where table_name = upper('&1') and
constraint_type='C';
b1 varchar2(100);
b2 varchar2(100);
b3 varchar2(32000);
b4 varchar2(100);
b5 varchar2(100);
fl number;
begin
open c1;
loop
fetch c1 into b1,b2,b3,b4,b5;
exit when c1%NOTFOUND;
select count(*) into fl from user_tab_columns
where table_name = upper('&1')
and upper(column_name)||' IS NOT NULL' = upper(b3);
if fl = 0 then
dbms_output.put_line(b1);
dbms_output.put_line(b2);
dbms_output.put_line(b3);
dbms_output.put_line(b4);
dbms_output.put_line(b5);
end if;
end loop;
end;
/
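The filter in the block above skips check constraints whose condition is exactly '<column> IS NOT NULL', because those are system-generated and come back automatically with the column definitions. A hedged Python stand-in (the column names are invented for illustration) mirroring the comparison upper(column_name)||' IS NOT NULL' = upper(b3):

```python
# Stand-in for the USER_TAB_COLUMNS lookup in the PL/SQL block above.
# Column names are invented for illustration.
columns = ["EMPNO", "ENAME", "SAL"]

def is_implicit_not_null(search_condition):
    """True when the check condition is exactly '<column> IS NOT NULL',
    i.e. a constraint Oracle generated for a NOT NULL column."""
    cond = search_condition.upper()
    return any(cond == f"{col.upper()} IS NOT NULL" for col in columns)

print(is_implicit_not_null("ename is not null"))  # True: skipped by the script
print(is_implicit_not_null("SAL > 0"))            # False: a real check constraint
```

Only the constraints for which this test is false get an ALTER TABLE ... ADD CONSTRAINT emitted.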
create or replace procedure dumzxcvreorg_dep(nam varchar2,typ
varchar2) as
cursor cur is
select type,decode(type,'PACKAGE BODY','PACKAGE',type) type1,
name from user_dependencies
where referenced_name=upper(nam) and referenced_type=upper(typ);
begin
dbms_output.enable(500000);
for c in cur loop
dbms_output.put_line('alter '||c.type1||' '||c.name||' compile;');
dumzxcvreorg_dep(c.name,c.type);
end loop;
end;
/
exec dumzxcvreorg_dep('&1','TABLE');
drop procedure dumzxcvreorg_dep;
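The recursion in dumzxcvreorg_dep can be sketched outside the database. This Python stand-in (the dependency map and object names are invented) walks dependents recursively and emits one ALTER ... COMPILE per object, mapping PACKAGE BODY back to PACKAGE just as the DECODE in the cursor does:

```python
# Hypothetical in-memory stand-in for USER_DEPENDENCIES:
# object name -> list of (type, name) pairs that reference it.
deps = {
    "EMP": [("VIEW", "EMP_V"), ("PACKAGE BODY", "EMP_PKG")],
    "EMP_V": [("PROCEDURE", "REPORT_EMP")],
    "EMP_PKG": [],
    "REPORT_EMP": [],
}

def compile_statements(name):
    """Walk dependents recursively, emitting one ALTER ... COMPILE per object,
    mapping PACKAGE BODY to PACKAGE as the DECODE in the PL/SQL cursor does."""
    stmts = []
    for typ, dep in deps.get(name, []):
        typ1 = "PACKAGE" if typ == "PACKAGE BODY" else typ
        stmts.append(f"alter {typ1} {dep} compile;")
        stmts.extend(compile_statements(dep))
    return stmts

for s in compile_statements("EMP"):
    print(s)
```

Like the original procedure, this sketch assumes the dependency graph is acyclic; a cycle would recurse forever.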
select 'grant '||privilege||' on '||table_name||' to '||grantee||
decode(grantable,'YES',' with grant option;',';') from
user_tab_privs where table_name = upper('&1');
select 'grant '||privilege||' ('||column_name||') on &1 to '||grantee||
decode(grantable,'YES',' with grant option;',';')
from user_col_privs where grantor=user and table_name=upper('&1')
order by grantee, privilege;
select 'create synonym '||synonym_name||' for '||table_owner||'.'||table_name||';'
from user_synonyms where table_name=upper('&1');
PROMPT REM
PROMPT REM YOU MAY HAVE TO LOG ON AS SYSTEM TO BE
PROMPT REM ABLE TO CREATE ANY OF THE PUBLIC SYNONYMS!
PROMPT REM
select 'create public synonym '||synonym_name||' for '||table_owner||'.'||table_name||';'
from all_synonyms where owner='PUBLIC' and table_name=upper('&1') and
table_owner=user;
prompt spool off
spool off
set echo on feed on verify on
The scripts REORGS1.SQL and REORGS2.SQL have been
created. Alter these scripts as necessary.
To recreate the table structure, first run REORGS1.SQL.
This script saves the content of your table in a table
called bk_.
If this script runs successfully, run REORGS2.SQL.
The result is spooled to REORGTB.LST.
Check this file before dropping the bk_ table.
*/
Please do NOT cross-post: "create a deep structure for dynamic internal table"
Regards
Uwe -
ERROR in configuration: more elements in file csv structure than field names
Hello,
we have a problem with file content conversion on a file (FTP) sender adapter when reading a flat delimited file.

Error:
Conversion of file content to XML failed at position 0: java.lang.Exception: ERROR converting document line no. 2 according to structure 'P': java.lang.Exception: ERROR in configuration: more elements in file csv structure than field names specified!

Details:
We have a Windows machine and each line in the file ends with CRLF.
We have PI 7.0 SP10 and the following patches:
SAPXIAF10P_3-10003482
SAPXIAFC10P_4-10003481

Adapter Type: File (Sender)
Transport Protocol: File Transfer Protocol (FTP)
Message Protocol: File Content Conversion
Adapter Engine: Integration Server

FTP Connection Parameters
Transfer Mode: Binary

Processing Parameters
File Type: Binary

Channel: IN_XXXXX_FILE_WHSCON

Input File (WZ00008.DAT):
N|0025013638||0000900379|0000153226|2007-07-24|2007-07-24||||
P|000030|2792PL1|2303061|1|KRT|||||

Content Conversion Parameters:
Recordset Structure: N,1,P,
Recordset Sequence: Ascending
Key Field Name: KF
Key Field Type: String

N.fieldNames: N1,N2,N3,N4,N5,N6,N7,N8,N9,N10
N.fieldSeparator: |
N.endSeparator: 'nl'
N.processFieldNames: fromConfiguration
N.keyFieldValue: N

P.fieldNames: P1,P2,P3,P4,P5,P6,P7,P8,P9,P10
P.fieldSeparator: |
P.endSeparator: 'nl'
P.processFieldNames: fromConfiguration
P.keyFieldValue: P

At the same time we have another channel, very similar to this one, which works:

Channel: IN_XXXXX_FILE

Input File (PZ000015.DAT):
N|2005-11-25|13:01||
P|0570001988|2005|305|6797PL1|2511091|3500|SZT|2005-11-25|1200|G002|1240|G002|||

Content Conversion Parameters:
Recordset Structure: N,1,P,
Recordset Sequence: Ascending
Key Field Name: KF
Key Field Type: String

N.fieldNames: N1,N2,N3,N4
N.fieldSeparator: |
N.endSeparator: 'nl'
N.processFieldNames: fromConfiguration
N.keyFieldValue: N

P.fieldNames: P1,P2,P3,P4,P5,P6,P7,P8,P9,P10,P11,P12,P13,P14,P15
P.fieldSeparator: |
P.endSeparator: 'nl'
P.processFieldNames: fromConfiguration
P.keyFieldValue: P

Converted file:
<?xml version="1.0" encoding="utf-8"?>
<ns:PZ_MT xmlns:ns="http://xxxxx.yyyyy.hr">
<PZ>
  <N>
    <N1>N</N1>
    <N2>2005-11-25</N2>
    <N3>13:01</N3>
    <N4></N4>
  </N>
  <P>
    <P1>P</P1>
    <P2>0570001988</P2>
    <P3>2005</P3>
    <P4>305</P4>
    <P5>6797PL1</P5>
    <P6>2511091</P6>
    <P7>3500</P7>
    <P8>SZT</P8>
    <P9>2005-11-25</P9>
    <P10>1200</P10>
    <P11>G002</P11>
    <P12>1240</P12>
    <P13>G002</P13>
    <P14></P14>
    <P15></P15>
  </P>
</PZ>
</ns:PZ_MT>

And if we remove the last delimiter before CRLF in the WZ00008.DAT file, then the file works, but we don't have fields N10 and P10 in the converted XML file.

Converted file:
<?xml version="1.0" encoding="utf-8"?>
<ns:WZ_MT xmlns:ns="http://xxxxx.yyyyy.hr">
<WZ>
  <N>
    <N1>N</N1>
    <N2>0025013639</N2>
    <N3></N3>
    <N4>0000900379</N4>
    <N5>0000153226</N5>
    <N6>2007-08-01</N6>
    <N7>2007-08-01</N7>
    <N8></N8>
    <N9></N9>
  </N>
  <P>
    <P1>P</P1>
    <P2>000010</P2>
    <P3>0212PL1</P3>
    <P4>2007071</P4>
    <P5>1.000</P5>
    <P6>KRT</P6>
    <P7></P7>
    <P8></P8>
    <P9></P9>
  </P>
</WZ>
</ns:WZ_MT>

Regards,
Mladen Kovacic
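The field-count mismatch described above can be sketched in a few lines of plain Python (this is an illustration, not the XI adapter's actual Java code): with ten configured field names, a record whose last field is followed by one more '|' before CRLF splits into eleven elements.

```python
# Ten configured field names, as in the P.fieldNames parameter above.
field_names = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8", "P9", "P10"]

line_with_trailing = "P|000030|2792PL1|2303061|1|KRT|||||"  # ends with '|'
line_without = "P|000030|2792PL1|2303061|1|KRT||||"         # last '|' removed

print(len(line_with_trailing.split("|")))  # 11 elements: one more than the
                                           # ten names -> configuration error
print(len(line_without.split("|")))        # 10 elements: matches, but P10 is
                                           # now the empty field before CRLF
```

This is consistent with the observed behaviour: removing the last delimiter makes the conversion succeed, at the cost of P10 (and N10) being empty in the converted XML.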
Hello,
it seems that we have a problem with the SAP XI AF CPA Cache.
We made these changes, and after them the AF Cache stopped working:
In the Visual Administrator, in service SAP XI AF CPA Cache, set the SLDAccess parameter to false
Save your entry and start the service
In service SAP XI AF CPA Cache, check that the cacheType parameter has the value DIRECTORY
In service SAP XI Adapter: XI, enter values for:
o xiadapter.isconfig.url - http://xidev:8038/sap/xi/engine?type=entry
o xiadapter.isconfig.username - XIAFUSER
o xiadapter.isconfig.password
o xiadapter.isconfig.sapClient - 001
o xiadapter.isconfig.sapLanguage - en
On the Integration Server, use transaction SMICM to check that you have entered the correct URL for the Integration Server.
On the Integration Server, use transaction SU01 to create a new user XIAFUSER
Assign the role SAP_XI_AF_SERV_USER_MAIN to the user XIAFUSER
In the Visual Administrator, check whether the user synchronization was successful
Use the new user to log on to the Integration Server and change the initial password to master password
Any idea how to get the SAP XI AF CPA Cache to update? -
How to check the records in Master Data Table?
Hi,
I am trying to load the Master Data Table using a Flat File. Now how do I check the records in the Master Data Table?
I did it the following way:
Info Provider->Info Object->Right Click->Display Data or Maintain Master Data
But it's not showing the records. It's asking something like: CID from...... To......
CID(SID) from............. To.......
Here CID means customer ID (a characteristic),
and it is showing some settings.
Please guide me.
Thanks & Regards
Hi Sri,
Go to T-code RSD1, type your InfoObject name, open the P table in the InfoObject, then select the execute symbol to see the data updated into the master data InfoObject.
regards
sap