Database design steps
Hi all,
I am new to Oracle database design. Can anyone please give me the steps/procedure to design a new Oracle database?
regards,
samurai
Are you referring to physical database design (disk layouts, file locations etc.) or logical database design (schema, table structures, indexes etc.)?
Based on your requirement, the answer will change.
But simply stated, it would be better if you could take a course from local universities or colleges which have database design as part of IT curriculum. It is not possible to summarize database design in a few lines.
Similar Messages
-
Physical Database Design Steps & Performance Considerations
Hi,
We have an Oracle 9i installation and need help creating a DB.
We need to know the physical database design steps & performance considerations,
like:
1- Technical considerations for the DB as per server capacity. How do we calculate this?
2- What will be the best design parameters for the DB?
Can you please help with how to do that? Any Metalink ID would help to get this information.
thanks
kishor
there is SOOO much to consider . . . .
Just a FEW things are . . .
Hardware - What kind of Host is the database going to run on?
CPU and Memory
What kind of Storage
What is the Network like?
What is the database going to do: OLTP or DW?
Start with your NEEDS and work to fulfill those needs on the budget given.
Since you say Physical Database Design . . . is your Logical Database Design done?
Does it fulfill the need of your application? -
Database design and pl/sql vs external procedures
hi,
My project involves predicting the arrival time of a bus at a bus stop, given statistical data of traffic patterns on the previous n (say 3) days, as well as the current location of the bus (latitude-longitude).
Given the current bus location, I derive the distance until the destination bus stop, which must be translated into time until arrival.
I've listed the triggers and procedures involved in making the prediction. These procedures, especially the determination of perpendicular distances, involve some complex trigonometric operations. I would like to know if my approach is correct and whether my database design is suited to the operations to be performed.
Will it be more efficient to implement the procedures as external procedures or as PL/SQL blocks?
This is my database design:
LINKS ( a link is the road segment between adjacent bus-stops)
LINK_ID NUMBER [PRIMARY-KEY]
START_LATITUDE NUMBER
START_LONGITUDE NUMBER
START_STOP_ID NUMBER
END_LATITUDE NUMBER
END_LONGITUDE NUMBER
END_STOP_ID NUMBER
LINK_LENGTH NUMBER
BUS_ROUTE
ROUTE_ID NUMBER
LINKS_ENROUTE VARRAY(30) OF NUMBER
STOPS_ENROUTE VARRAY(30) OF NUMBER
TRACK(keeps track of current location of bus)
BUS_ID NUMBER [PRIMARY-KEY]
ROUTE VARCHAR2(20)
LATITUDE NUMBER
LONGITUDE NUMBER
TS TIMESTAMP
LINK_ID NUMBER
START_STOP NUMBER
END_STOP NUMBER
ARRIVAL_TIMES(actual arrival times of the bus, updated by track)
BUS_ID NUMBER [PRIMARY-KEY]
BUS_ROUTE VARCHAR2(20)
ARRIVAL TIMESTAMP
STOP_ID NUMBER
ETA (expected time of arrival)
BUS_ID NUMBER
BUS_ROUTE VARCHAR2(20)
BUS_STOP_ID NUMBER
ARR_TIME VARRAY(5) OF TIMESTAMP
Triggers and procedures
1)TRACK_TRIGGER
On insert/update of TRACK, determine which link the bus is currently on.
Invoke a procedure that calculates perpendicular distance from current location to all links en-route (cursor on LINKS).
Results are stored in a temporary table. Select the link-id of the tuple whose perpendicular distance is MINIMUM. This is the link the bus is currently on. Place link-id, start_stop_id and end_stop_id in corresponding row of TRACK.
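For illustration, the perpendicular-distance step might be sketched as a PL/SQL function along these lines. This is only a hypothetical sketch: the function name and parameters are invented, and it treats latitude/longitude as planar coordinates (a flat-earth approximation, usually acceptable for short urban links).

```sql
-- Hypothetical sketch: perpendicular distance from the current position
-- to one link, with lat/long treated as planar coordinates.
CREATE OR REPLACE FUNCTION perp_distance (
    p_lat  NUMBER, p_lon  NUMBER,   -- current bus position
    p_lat1 NUMBER, p_lon1 NUMBER,   -- link start
    p_lat2 NUMBER, p_lon2 NUMBER    -- link end
) RETURN NUMBER IS
    v_dx NUMBER := p_lon2 - p_lon1;
    v_dy NUMBER := p_lat2 - p_lat1;
    v_t  NUMBER;
BEGIN
    -- Project the point onto the segment and clamp to [0, 1]
    v_t := ((p_lon - p_lon1) * v_dx + (p_lat - p_lat1) * v_dy)
           / NULLIF(v_dx * v_dx + v_dy * v_dy, 0);
    v_t := LEAST(1, GREATEST(0, NVL(v_t, 0)));
    -- Distance from the point to its projection on the link
    RETURN SQRT(POWER(p_lon - (p_lon1 + v_t * v_dx), 2)
              + POWER(p_lat - (p_lat1 + v_t * v_dy), 2));
END;
/
```

With such a function, the "minimum distance" step would not need a temporary table at all: an ordered subquery with `WHERE ROWNUM = 1` over LINKS, ordered by `perp_distance(...)`, returns the closest link directly.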
2)ARRIVAL_TRIGGER
a) Update ARRIVAL_TIMES: store the start-stop ID with the timestamp of the current track record.
b) Update ETA: find the bus stops that come before the START_STOP of the current track record. All these rows are deleted from the ETA table, as the bus has already crossed these stops.
3)Prediction Algorithm Procedure.
Determine the distance until destination for each STOP, up to 20 stops down from the current location.
Determine the current avg. speed of the bus over a 2-hour window, by dividing total distance traveled by time taken.
Calculate time until arrival T1 = distance until destination / current avg. speed.
From the records of the previous n days (say n = 3), find those buses on the same route that were near the link the bus is currently on, and again determine their avg. speed over a 2-hour window.
Calculate travel time T(i) = distance until destination / avg. speed, for i = 2, 3, 4.
The final predicted arrival time is a weighted sum of all T(i).
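The weighted-sum step above could be sketched in PL/SQL like this. The T(i) values and weights are invented sample numbers; the real values would come from the live track data and the historical records.

```sql
-- Hypothetical sketch: weighted combination of the travel-time estimates.
-- T(1) is based on today's live speed, T(2)..T(4) on the previous 3 days.
DECLARE
    TYPE t_num_tab IS TABLE OF NUMBER;
    v_t   t_num_tab := t_num_tab(420, 450, 480, 400); -- T(i) in seconds (sample data)
    v_w   t_num_tab := t_num_tab(0.4, 0.3, 0.2, 0.1); -- weights, summing to 1
    v_eta NUMBER := 0;
BEGIN
    FOR i IN 1 .. v_t.COUNT LOOP
        v_eta := v_eta + v_w(i) * v_t(i);
    END LOOP;
    DBMS_OUTPUT.put_line('Predicted travel time (s): ' || v_eta);
END;
/
```

Weighting today's live estimate most heavily is one reasonable choice; the weights themselves would need tuning against actual arrival data.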
I hope I'm not asking for too much, but the help would be greatly appreciated.
Thank you,
Amina
hello,
actually I can manage ETA without a varray, since there will be a maximum of 3-4 values of expected arrival times at each stop. This can be done with separate columns.
though I don't quite understand how lag() will help me... from what I understand, lag() is for accessing values of previous rows. But in the ETA table, each element in the varray (if there is one) is going to be the expected arrival time of buses on a particular route at that particular stop, and is different from the arrival time at a previous stop (i.e. row).
but for my other table BUS_ROUTE I have 2 varrays describing the links and stops en route. In quite a few procedures I have to loop through these arrays and perform some calculations in every iteration. Is a varray the best way to go, or nested tables?
Thank you
Amina
As an aside, external procedures tend by their very nature to be slow - there's an overhead incurred each time we step outside the database. Therefore you really ought to avoid using a C extproc unless your calculations really cannot be done in PL/SQL or a Java Stored Procedure.
Also, before you go down the VARRAY route you should consider the virtues of analytic functions, notably [url=http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/functions56a.htm#83619]LAG()[/url]. I think you really ought to do some benchmarking of performance before you start adding denormalised columns like ETA. You may find the overhead in maintaining those columns exceeds their perceived benefits.
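As a hedged sketch of the analytic alternative suggested above, using the column names from the posted ARRIVAL_TIMES table, LAG() can compare each arrival with the previous one for the same route and stop in a single pass, with no denormalised columns to maintain:

```sql
-- Hypothetical: gap between consecutive arrivals at each stop per route,
-- computed directly from ARRIVAL_TIMES with an analytic function.
SELECT stop_id,
       bus_route,
       arrival,
       arrival - LAG(arrival) OVER (PARTITION BY stop_id, bus_route
                                    ORDER BY arrival) AS gap_since_last_bus
FROM   arrival_times;
```

The first arrival in each partition gets NULL for the gap, which is usually the desired behaviour.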
Cheers, APC -
Need a sample Oracle Database Design Document
Looking for a sample database design document.
Which may include below high level steps if possible.
1. Hardware & software specifications.
2. Database Sizing/ estimate.
3. Schema Design
4. Data transformations/ feeds
Thanks
Mallikharjuna
user8919741 wrote:
Looking for a sample database design document.
Which may include below high level steps if possible.
If you Google "database design document" you'll see that there are many templates available. I've never used any such document, and I think you'll be hard pressed to find one that fits your business needs exactly (assuming there are business needs behind your request).
1. Hardware & software specifications.
2. Database Sizing/ estimate.
I would suggest you analyze existing workloads and base your estimates on that. If there is no existing workload, you'll have to estimate one based on how many users you estimate will use the solution, or some other clever estimation method.
3. Schema Design
This is a matter of translating business/application requirements to what needs to be stored in the relational database, and creating a data model based upon that. Every model will be different.
4. Data transformations/ feeds
You could simply document each transformation/feed and its purpose.
Thanks
Mallikharjuna -
Hi,
I'm a DBA and I was assigned to design a new (fresh) database in Oracle 11gR2 on the Windows platform for a project. Since I'm new to database design, I'm confused about what perspective to approach this from and where to begin.
Can anyone please help on this.
Note: URLs are welcome.
1011442 wrote:
Hi,
Thanks for providing the link.
The link that you sent sounds good and provides much info on considerations in creating a database and schema security.
But I want the logical design part.
Assume that the database design is for a College Management System. It might have an enrollment module, result module, staff module, etc., including more schema objects. I'm expecting the procedures or steps involved in such designs & how to design them.
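As a purely hypothetical sketch (all names invented), the enrollment module mentioned above might eventually translate into tables like these once the entities and their relationships are identified:

```sql
-- Invented example: entities become tables, the many-to-many
-- "student enrolls in course" relationship becomes its own table.
CREATE TABLE student (
    student_id  NUMBER        PRIMARY KEY,
    first_name  VARCHAR2(50)  NOT NULL,
    last_name   VARCHAR2(50)  NOT NULL
);

CREATE TABLE course (
    course_id   NUMBER        PRIMARY KEY,
    course_name VARCHAR2(100) NOT NULL
);

CREATE TABLE enrollment (
    student_id  NUMBER REFERENCES student (student_id),
    course_id   NUMBER REFERENCES course (course_id),
    enrolled_on DATE   DEFAULT SYSDATE,
    CONSTRAINT pk_enrollment PRIMARY KEY (student_id, course_id)
);
```

The real design would only be drawn up after the requirements interviews; this just shows the shape of the output.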
It starts with interviewing the end users and determining their requirements... especially fleshing out all the entities (student, staff member, department, course, class, etc, etc, etc) and their attributes, then determining the relationships between said entities and attributes. Those entities, attributes, and relationships will be your logical database design. -
Database design to support parameterised interface with MS Excel
Hi, I am a novice user of SQL Server and would like some advice on how to solve a problem I have. (I hope I have chosen the correct forum to post this question)
I have created a SQL Server 2012 database that comprises approx 10 base tables, with a further 40+ views that either summarise the base table data in various ways, or build upon other views to create more complex data sets (upto 4 levels of view).
I then use EXCEL to create a dashboard that has multiple pivot table data connections to the various views.
The users can then use standard excel features - slicers etc to interrogate the various metrics.
The underlying database holds a single day's worth of information, but I would like to extend this to cover multiple days' worth of data, with the Excel spreadsheet having a cell that defines the date for which information is to be retrieved. (The underlying data tables would need to be extended to have a date field.)
I can see how the Excel connection string can be modified to filter the results such that a column value matches the date field, but how can this date value be passed down through all the views to ensure that information from base tables is restricted for the specified date, rather than the final result set being passed back to Excel - I would rather not have the server resolve the views for the complete data set.
I considered parameterisation of views, but I don't believe views support parameters. I also considered stored procedures, but I don't believe that stored procedures allow result sets to be used as pseudo tables.
What other options do I have, or have I failed to grasp the way SQL Server creates its execution plans - will simply having the filter at the top level ensure the result set is minimised at the lower level? (I don't really want the time taken for the dashboard refresh to increase - it currently takes approx 45 seconds following SQL Server Engine Tuning Advisor recommendations.)
As an example of 3 of the views,
Table A has a row per system event (30,000+ per day), each event having an identity, a TYPE eg Arrival or Departure, with a time of event, and a planned time for the event (a specified identity will have a sequence of Arrival and Departure events)
View A compares separate rows to determine how long between the Arrival and Departure events for an identity
View B compares separate rows to determine how long between planned Arrival and Departure events for an identity
View C uses View A and view B to provide the variance between actual and planned
Excel dashboard has graphs showing information retrieved from Views A, B and C. The dashboard is only likely to need to query a single days worth of information.
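For concreteness, the pairing logic behind View A might look something like this (a hypothetical sketch; table and column names are invented, and it assumes exactly one Arrival and one Departure row per identity):

```sql
-- Invented T-SQL sketch: self-join Table A to pair each Arrival with
-- the Departure for the same identity, returning the interval between them.
CREATE VIEW dbo.vwActualTurnaround
AS
SELECT a.EventIdentity,
       DATEDIFF(MINUTE, a.EventTime, d.EventTime) AS MinutesBetween
FROM   dbo.TableA AS a
JOIN   dbo.TableA AS d
       ON  d.EventIdentity = a.EventIdentity
       AND d.EventType     = 'Departure'
WHERE  a.EventType = 'Arrival';
```

If an identity can ever have more than one Arrival/Departure pair, this join multiplies rows, which is exactly the kind of hidden assumption worth checking.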
Thanks for your time.
You are posting in the database design forum but it seems to me that you have 2 separate but highly dependent issues - neither of which is really database design related at this point. Rather you have a user interface issue and a database programmability issue. Those I cannot really address since much of that discussion requires knowledge of your users, how they interface with the database, what they use the data for, etc. In addition, it seems that Excel is the primary interface for your users - so it may be that you should post your question to an Excel forum.
However, I do have some comments. First, views based on views is generally a bad approach. Absent the intention of indexing (i.e., materializing) the views, the db engine does nothing different for a view than it does for any ad-hoc query.
Unfortunately, the additional layering of logic can impede the effectiveness of the optimizer. The more complex your views become and the deeper the layering, the greater the chance that you befuddle the optimizer.
I would rather not have the server resolve the views for the complete data set
I don't understand the above statement but it scares me. IMO, you DO want the server to do as much work as possible since it is closest to the data and has (or should have) the resources to access and manipulate the data and generate the desired
results. You DON'T want to move all the raw data involved in a query over the network and into the client machine's storage (memory or disk) and then attempt to compute the desired values.
I considered parameterisation of views, but I dont believe views support parameters, I also considered stored procedures, but I dont believe that stored procedures allow result sets to be used as pseudo tables.
Correct on the first point, though there is such a thing as a TVF which is similar in effect. Before you go down that path, let's address the second statement. I don't understand that last bit about "used as pseudo tables" but that sounds more
like an Excel issue (or maybe an assumption). You can execute a stored procedure and use/access the resultset of this procedure in Excel, so I'm not certain what your concern is. User simplicity perhaps? Maybe just a terminology issue? Stored
procedures are something I would highly encourage for a number of reasons. Since you refer to pivoting specifically, I'll point out that sql server natively supports that function (though perhaps not in the same way/degree Excel does). It
is rather complex tsql - and this is one reason to advocate for stored procedures. Separate the structure of the raw data from the user.
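A hedged sketch of the TVF idea mentioned above (all names invented): an inline table-valued function accepts the report date and filters the base table before any of the layered logic runs, so the predicate is applied at the lowest level rather than after the views have resolved.

```sql
-- Invented T-SQL sketch: parameterise by date with an inline TVF.
-- Assumes the base table has been extended with an EventDate column.
CREATE FUNCTION dbo.fnEventsForDay (@ReportDate DATE)
RETURNS TABLE
AS
RETURN
    SELECT EventIdentity, EventType, EventTime, PlannedTime
    FROM   dbo.TableA
    WHERE  EventDate = @ReportDate;
```

The higher-level logic can then select from `dbo.fnEventsForDay(@ReportDate)` exactly as it would from a table, and a stored procedure can wrap the whole thing for Excel.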
(I dont really want the time taken for the dashboard refresh to increase - it currently takes approx 45 seconds following SQL Server Engine Tuning Advisor recommendations)
DTA has its limitations. What it doesn't do is evaluate the "model" - which is where you might have more significant issues. Tuning your queries and indexing your tables will only go so far to compensate for a poorly designed schema (not that
yours is - just a generalization). I did want to point out that your refresh process involves many factors - the time to generate a resultset in the server (including plan compilation, loading the data from disk, etc.), transmitting that data over the
network, receiving and storing the resultset in the client application, manipulating the resultset into the desired form/format), and then updating the display. Given that, you need to know how much time is spent in each part of that process - no sense
wasting time optimizing the smallest time consumer.
So now to your sample table - Table A. First, I'll give you my opinion of a flawed approach. Your table records separate facts about an entity as multiple rows. Such an approach is generally a schema issue for a number of reasons.
It requires that you outer join in some fashion to get all the information about one thing into a single row - that is why you have a view to compare rows and generate a time interval between arrival and departure. I'll take this a step further and assume
that your schema/code likely has an assumption built into it - specifically that a "thing" will have no more than 2 rows and that there will only be one row with type "arrival" and one row with type "departure". Violate that assumption and things begin to
fall apart. If you have control over this schema, then I suggest you consider changing it. Store all the facts about a single entity in a single row. Given the frequency that I see this pattern, I'll guess that you
cannot. So let's move on.
30 thousand rows is tiny, so your current volume is negligible. You still need to optimize your tables based on usage, so you need to address that first. How is the data populated currently? Is it done once as a batch? Is it
done throughout the day - and in what fashion (inserts vs updates vs deletes)? You only store one day of data - so how do you accomplish that specifically? Do you purge all data overnight and re-populate? What indexes
have you defined? Do all tables have a clustered index or are some (most?) of them heaps? OTOH, I'm going to guess that the database is at most a minimal issue now and that most of your concerns are better addressed at the user interface
and how it accesses your database. Perhaps now is a good time to step back and reconsider your approach to providing information to the users. Perhaps there is a better solution - but that requires an understanding of your users, the skillset of
everyone involved, what you have to work with, etc. Maybe just some advanced excel training? I can't really say and it might be a better question for a different forum.
One last comment - "identity" has a special meaning in sql server (and most database engines I'm guessing). So when you refer to identity, do you refer to an identity column or the logical identity (i.e., natural key) for the "thing" that Table A is
attempting to model? -
Suggestion: Create a Database Design Forum
I recommend the creation of a new forum dealing exclusively with database design questions, such as setting Primary Keys, Unique constraints, Check constraints, Indexes, schema-creation scripts, etc. There is no forum devoted exclusively to this topic now and I feel it would be very helpful to the user community. It would certainly make searching for answers to design questions much easier.
Billy Verreynne wrote:
Prohan wrote:
I don't agree there.
1. How to create a relational model certainly IS relevant to Oracle, which is a RELATIONAL DBMS.
Oracle also supports data warehousing (star schema designs), network/hierarchical designs, object-relational designs - or pretty much any data model that you may come up with. Calling it just a relational DBMS is incorrect.
2. Your point that logical models are independent of specific technology is correct. What you're missing is that if a specific technology makes use of a certain foundational body of knowledge, that knowledge is a legitimate topic for a forum whose users use that specific technology.
That is putting the cart in front of the horse IMO.
I would rather see data modeling and logical database design being done in a way that is untainted with specific vendor implementations and technology used. There needs to be a clear line dividing the design from the implementation. If not, then design decisions can (and will) be made based not on the correct logical data modeling principles, but whether it can be "handled" by the technology. A design that is tainted like that, will always be less than optimal (especially as technology is continually evolving and changing).
An OTN forum for database design will invariably be tainted with Oracle technology - and instead of learning sound data modeling fundamentals, a warped view of data modeling will be conveyed. Where doing abc will be acceptable (when it is not), because Oracle has feature xyz that can make the flawed design work (in a fashion).
Excellent points. I think (or at least hope) such a forum would attract some number of pure theorists to straighten out the view. This might make for a lively forum, and might actually influence the real products, and might even get the cart on the right side of the horse.
Hmmm, I guess I do sound hopelessly optimistic. -
Hi Experts,
If AUTO UPDATE STATISTICS is enabled in the database design, why do we need to update statistics as a daily/weekly maintenance plan?
Vinai Kumar Gandla
Hi Vikki,
Many systems rely solely on SQL Server to update statistics automatically (AUTO UPDATE STATISTICS enabled); however, based on my research, large tables, tables with uneven data distributions, tables with ever-increasing keys and tables that have significant changes in distribution often require manual statistics updates, as explained below.
1. If a table is very big, then waiting for 20% of rows to change before SQL Server automatically updates the statistics could mean that millions of rows are modified, added or removed before it happens. Depending on the workload patterns and the data, this could mean the optimizer is choosing substandard execution plans long before SQL Server reaches the threshold where it invalidates statistics for a table and starts to update them automatically. In such cases, you might consider updating statistics manually for those tables on a defined schedule (while leaving AUTO UPDATE STATISTICS enabled so that SQL Server continues to maintain statistics for other tables).
2. In cases where you know the data distribution in a column is "skewed", it may be necessary to update statistics manually with a full sample, or create a set of filtered statistics, in order to generate query plans of good quality. Remember, however, that sampling with FULLSCAN can be costly for larger tables, and must be done so as not to affect production performance.
3. It is quite common to see an ascending key, such as an IDENTITY or date/time data type, used as the leading column in an index. In such cases, the statistic for the key rarely matches the actual data unless we update the statistic manually after every insert.
So in the cases above, we could perform manual statistics updates by creating a maintenance plan that will run the UPDATE STATISTICS command and update statistics on a regular schedule. For more information about the process, please refer to the article:
https://www.simple-talk.com/sql/performance/managing-sql-server-statistics/
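A minimal sketch of such a scheduled job might be (the table name is invented; the sample rate should be tuned to your environment):

```sql
-- Refresh statistics on a known-volatile table with a full scan.
UPDATE STATISTICS dbo.BigVolatileTable WITH FULLSCAN;

-- Or refresh every statistic in the database via the supplied procedure:
EXEC sp_updatestats;
```

FULLSCAN gives the most accurate histogram but is the most expensive option, so it is usually reserved for the handful of tables that genuinely need it.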
Regards,
Michelle Li -
Logical Database design and physical database implementation
Hi
I am an Oracle DBA and we started a proactive server dashboard portal, which reports all aspects of our infrastructure (Dev, QA and Prod, performance, capacity, number of servers, no. of CPUs, decommissioned date, OS level, database patch level) etc.
This has to be done entirely by our DBA team as this is not an externally funded project. Now I was asked to do "Logical Database design and physical Database implementation".
Even though I know roughly what that means (like designing a whole set of tables in star schema format), I have never done this before.
In my mind I have a rough set of tables that can be used, but again I think there is a lot of engineering involved in this area to make sure that we do it properly.
I am wondering whether you guys might have some recommendations for me in the sense of where to start? Are there any documents online, are there any books on this topic? Are there any documents which explain this with examples?
Also, what exactly is the difference between logical database design and physical database implementation?
Thanks and Regards
Logical database design is the process of taking a business or conceptual data model (often described in the form of an Entity-Relationship Diagram) and transforming that into a logical representation of that model using the specific semantics of the database management system. In the case of an RDBMS such as Oracle, this representation would be in the form of definitions of relational tables; primary, unique and foreign key constraints; and the appropriate column data types supported by the RDBMS.
Physical database implementation is the process of taking the logical database design and translating that into the actual DDL statements supported by the target RDBMS that will create the database objects in a target RDBMS database. This will generally include specific physical implementation details such as the specification of tablespaces, use of specialised indexing (bitmap, clustered etc), partitioning, compression and anything else that relates to how data will actually be physically stored inside the database.
It sounds like you already have a physical implementation? If so, you can reverse engineer this implementation into a design tool such as SQL Developer Data Modeller. This will create a logical design by examining the contents of the Oracle data dictionary. Even if you don't have an existing database, Data Modeller is a good tool to use as a starting point for logical and even conceptual/business models.
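To make the distinction concrete, here is a hedged illustration (an invented table, loosely based on the dashboard attributes mentioned above). The logical design stops at tables, columns and keys; the physical implementation adds storage decisions to the same DDL:

```sql
-- Logical design: just the structure and constraints.
CREATE TABLE server_inventory (
    server_id         NUMBER        PRIMARY KEY,
    host_name         VARCHAR2(64)  NOT NULL,
    environment       VARCHAR2(10)  NOT NULL,  -- Dev / QA / Prod
    cpu_count         NUMBER,
    decommission_date DATE
);

-- Physical implementation: the SAME table, now with storage clauses
-- (shown as an alternative, not to be run alongside the one above).
CREATE TABLE server_inventory (
    server_id         NUMBER        PRIMARY KEY,
    host_name         VARCHAR2(64)  NOT NULL,
    environment       VARCHAR2(10)  NOT NULL,
    cpu_count         NUMBER,
    decommission_date DATE
)
TABLESPACE dashboard_data
PCTFREE 10
COMPRESS;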
If you want to read anything about logical design, "An Introduction to Database Systems" by Date is always a good starting point. "Database Systems - A Practical Approach to Design, Implementation and Management" by Connolly & Begg is also an excellent reference. -
Help with a database design for community housing project
Talking database design
Hi all, I have been wondering about the design of tables for a big block of residential units. There are 100+ rooms; there are about 25 houses in this complex, but the 100+ rooms are all rented out separately.
It's just like a college campus really, but it's not a college campus; it's a little unique.
The rents are applied as a percentage of income, so that's not common, so I included a tblRoomCost where the pre-calculated weekly cost is entered, and it's got a date field in there for when the rent charged changes. I probably need to include an income field in tblCustomer, even just as an Admin reference.
So is this looking pretty ok, and would there be any point in scrapping the database and using text files?
So what do you think of these tables please ?
tblCustomer
pkCustomer, fldFirstName, fldLastname
tblRoomAllocation
pkRoomID, fkCustomer
tblRoomCost
pkRoomID, fldDate, fldRoomCost
tblTransactionID
pkTransactionID, fldDate, fldTransactionType, fkCustomer, fldAmount
The naming scheme is one I learned and haven't thought past it, though I do get into trouble, and your suggestion may prove useful when coding!
I thought the tblRoomAllocation and tblRoomCost took care of changing. Though I see now that tblRoomAllocation really needs a Date field. And tblRoomCost has a fldRoomCost, which isn't really that good an implementation, as the rooms themselves are always priced according to the income of a resident and not because of the room. So the real-world object is getting fudgy...
It is extremely unlikely that Admin would ever allow two rooms to be rented by an individual.
I will have a look at your suggestion that an Accounts table be used.
Also I thought about having a StartDate and EndDate in tblTransaction to represent the period being paid for. It just seems like a lot of dates in a transaction table: one to record the transaction, the other two to indicate the period that was actually being paid for. When perhaps I should work that out in the runtime code? This will be a VB.Net app.
Do you think there is a need for Accounts table if only one room is permitted to be rented , though room changing may be common?
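On the date field in tblRoomCost: one way to pick the rent in force on a given date is a correlated subquery. This is only a sketch, and it assumes the key of tblRoomCost is really (pkRoomID, fldDate) - pkRoomID alone cannot be the primary key if a room has multiple dated cost rows:

```sql
-- Find the cost row whose fldDate is the latest on or before :AsOfDate.
SELECT rc.fldRoomCost
FROM   tblRoomCost rc
WHERE  rc.pkRoomID = :RoomID
AND    rc.fldDate  = (SELECT MAX(rc2.fldDate)
                      FROM   tblRoomCost rc2
                      WHERE  rc2.pkRoomID = rc.pkRoomID
                      AND    rc2.fldDate <= :AsOfDate);
```

This effective-dating pattern also answers the question about period dates in tblTransaction: the database can derive the charge for any period from the dated cost rows.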
And thx for your input.
Message was edited by:
user521381 -
New database design product - ModelRight for Oracle
Whether you are a beginner or an expert data modeler, ModelRight for Oracle is the database design tool of choice for Oracle. Here are some of the things that make ModelRight for Oracle unique:
• Extensive support for Oracle – support and advanced features - OR types, object tables, object views, materialized views, index-organized tables, clusters, partitions, function-based indexes, etc...
• full Forward, Reverse and Compare capabilities
• Unique User Interface and Diagrammatic elements: with our mode-less and hyperlinked user-interface, navigating and editing your model is intuitive and easy.
• Extensive use of Domains: you can create Domains for just about every type of object to propagate patterns, reusability & classification.
• Unprecedented level of programmatic control: you can control the smallest details of the FE and Alter Script generation process.
Please check out our website at http://www.modelright.com and download the free trial version.
Please let me know if you have any suggestions or comments.
Thank you,
Tim Guinther
Founder, ModelRight, Inc.
[email protected]
(215) 534 5282
Excellent product. Pretty impressive. Gorgeous diagrams and sophisticated reports. Loved the myriad of navigation ways and non-obtrusive modeless dialogs. Very easy to use!
Keep the good work up. -
Hi,
I have a doubt about database design. I am designing a database for Inventory Management, wherein I need to store the order details in a one-to-many relationship. I have designed my tables as follows (just a sample):
-- Tbl_OrderOne
OrderCode (PK)
OrderDate
CustomerInfo
-- Tbl_OrderMany
OrderCode (FK)
ItemCode
Qty
The doubt I have is: since Oracle has features like creating object types and using them as data types to store data, can I use this feature in this case? Here transactions will be very high - does it affect performance? Presently I have designed it using a foreign key reference, as shown at the beginning. If using object types to store that huge volume of data is feasible and increases performance, then I can go for the object type feature.
Thanx in Advance
Regards
Vinayak
Vinayak,
One more thing. You can check out more information about NESTED TABLE at http://otn.oracle.com/docs/products/oracle8i/doc_library/817_doc/appdev.817/a76976/adobjdes.htm#446526
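For reference, a hedged sketch of what the nested-table alternative might look like for the order tables above (Oracle 8i-era syntax; whether it actually outperforms the plain foreign-key design under a high transaction volume is exactly what should be benchmarked):

```sql
-- The order lines become a collection type stored with the order.
CREATE TYPE order_line_t AS OBJECT (
    item_code VARCHAR2(20),
    qty       NUMBER
);
/
CREATE TYPE order_line_tab IS TABLE OF order_line_t;
/
CREATE TABLE tbl_order (
    order_code    NUMBER PRIMARY KEY,
    order_date    DATE,
    customer_info VARCHAR2(100),
    lines         order_line_tab
)
NESTED TABLE lines STORE AS tbl_order_lines;
```

In practice the two flat tables with a foreign key are usually simpler to index and query, which is why the FK design is often kept even where object types are available.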
Regards,
Geoff -
Re: Database design software ?
The 'original' database development program is still being developed, with a pedigree of over 40 years:
www.dbase.com
Regards
Cliff
Cliff Rielly
"Chris Seymour" <[email protected]> wrote in message
news:f1f96g$mq3$[email protected]..
> Was curious to see what database design software people were using?
> I have been looking around and have some ideas but I like to check what others are using.
If you are looking for a generic db design tool, and not for a database GUI, Case Studio is worth a try:
www.casestudio.com/
Massimo Foti, web-programmer for hire
Tools for ColdFusion and Dreamweaver developers:
http://www.massimocorner.com -
Re: (forte-users) Round-trip database design
We have used Erwin quite sucessfully, but it's not cheap.
"Rottier, Pascal" <Rottier.Pascalpmintl.ch> on 02/15/2001 04:51:01 AM
To: 'Forte Users' <forte-userslists.xpedior.com>
cc:
Subject: (forte-users) Round-trip database design
Hi,
Maybe not 100% the right mailing list but it's worth a try.
Does anyone use tools to automatically update the structure of an existing
database?
For example, you have a full database model (Power Designer) and you've
created a script to create all these tables in a new and empty database.
You've been using this database and filling tables with data for a while.
Now you want to do some marginal modifications on these tables. Add a
column, remove a column, rename a column, etc.
Is there a way to automatically change the database without losing data and
without having to do it manually (except the manual changes in the (Power
Designer) model).
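Tools in this class (PowerDesigner, ERwin and the like) typically compare the model against the live schema and generate an "alter" script rather than a drop-and-recreate script, so existing data survives. By hand, the marginal changes described above amount to statements like these (hypothetical table/column names, Oracle syntax):

```sql
-- Add a column: existing rows get NULL (or a DEFAULT) in the new column.
ALTER TABLE customer ADD (loyalty_tier VARCHAR2(10));

-- Remove a column: only this column's data is lost.
ALTER TABLE customer DROP COLUMN fax_number;

-- Rename a column: data is untouched (RENAME COLUMN requires Oracle 9i+).
ALTER TABLE customer RENAME COLUMN phone TO phone_number;
```

The value of the tooling is in diffing the model against the database automatically, so none of these statements has to be written or sequenced manually.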
Thanks
Pascal Rottier
Atos Origin Nederland (BAS/West End User Computing)
Tel. +31 (0)10-2661223
Fax. +31 (0)10-2661199
E-mail: Pascal.Rottiernl.origin-it.com
++++++++++++++++++++++++++++
Philip Morris (Afd. MIS)
Tel. +31 (0)164-295149
Fax. +31 (0)164-294444
E-mail: Rottier.Pascalpmintl.ch
For the archives, go to: http://lists.xpedior.com/forte-users and use
the login: forte and the password: archive. To unsubscribe, send in a new
email the word: 'Unsubscribe' to: forte-users-requestlists.xpedior.com
Hello Pascal,
Forte has classes (DBColumnDesc etc.) that can scan the database structure;
Express uses these classes to determine what the BusinessClass looks like. We
use Forte to create the tables, indexes and constraints. The problem is that
these classes are read-only: you cannot write through them. Our solution will
be to create our own classes modelled on the existing ones, so that we can
update the database structure, and perhaps even change the database tables,
from tool code. Another reason for keeping the database structure in the
application is that the table structure the Forte code works on then always
stays up to date with the code: you can compare the structure of the database
with your business classes at any time, and convert a wrong structure into the
correct one with just a little code.
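The compare-and-convert idea can be sketched like this, using Python and sqlite3 in place of Forte's DBColumnDesc-style metadata; the table and the expected column list are invented for the example:

```python
# Sketch: read the live column list from the database catalog and diff it
# against the columns the application's business class expects.
import sqlite3

def table_columns(conn, table):
    """Return the live column names of a table, in order."""
    return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

def diff_structure(conn, table, expected):
    """Report which expected columns are missing and which live ones are extra."""
    live = set(table_columns(conn, table))
    exp = set(expected)
    return {"missing": sorted(exp - live), "extra": sorted(live - exp)}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")

# The business class expects an additional 'status' column.
print(diff_structure(conn, "orders", ["id", "amount", "status"]))
# -> {'missing': ['status'], 'extra': []}
```

The "little piece of code" that converts a wrong structure would then iterate over the `missing` list and issue the corresponding ALTER statements.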
Hope this helps
Joseph Mirwald -
Error in load database content step
Hi All,
I am having a problem with step 22, the "Load Java database content" step, of the Netweaver Sneak Preview SP15 installation procedure.
It seems that the program trying to connect to the J2E database is failing. The details from the log browser are below.
ERROR 2006-01-16 18:13:04
CJS-20065 Execution of JLoad tool 'C:\Java\j2sdk1.4.2_08/bin/java.exe '-classpath' './sharedlib/antlr.jar;./sharedlib/exception.jar;./sharedlib/jddi.jar;./sharedlib/jload.jar;./sharedlib/logging.jar;./sharedlib/offlineconfiguration.jar;./sharedlib/opensqlsta.jar;./sharedlib/tc_sec_secstorefs.jar;d:\sapdb\programs\runtime\jar\sapdbc.jar;C:/usr/sap/J2E/SYS/global/security/lib/tools/iaik_jce.jar;C:/usr/sap/J2E/SYS/global/security/lib/tools/iaik_jsse.jar;C:/usr/sap/J2E/SYS/global/security/lib/tools/iaik_smime.jar;C:/usr/sap/J2E/SYS/global/security/lib/tools/iaik_ssl.jar;C:/usr/sap/J2E/SYS/global/security/lib/tools/w3c_http.jar' '-Duser.timezone=Europe/Berlin' '-showversion' '-Xmx512m' 'com.sap.inst.jload.Jload' '-sec' 'J2E,jdbc/pool/J2E,C:\usr\sap\J2E\SYS\global/security/data/SecStore.properties,C:\usr\sap\J2E\SYS\global/security/data/SecStore.key' '-dataDir' 'D:/nwunrar/NWSneakPreviewSP15/SAP_NetWeaver_04_SR_1_Installation_Master\IM01_NT_I386\..\..\SneakPreviewContent\JDMP' '-job' 'C:\Program Files\sapinst_instdir\NW04SR1\WEBAS_COPY\ONE_HOST/IMPORT.XML' '-log' 'C:\Program Files\sapinst_instdir\NW04SR1\WEBAS_COPY\ONE_HOST/jload.log'' aborts with returncode 1. Check 'C:\Program Files\sapinst_instdir\NW04SR1\WEBAS_COPY\ONE_HOST/jload.log' and 'C:\Program Files\sapinst_instdir\NW04SR1\WEBAS_COPY\ONE_HOST/jload.java.log' for more information.
The contents of jload.log and jload.java.log are almost the same. The error reported there is listed below.
17.01.06 00:13:04 com.sap.inst.jload.Jload logStackTrace
SEVERE: com.sap.dbtech.jdbc.exceptions.JDBCDriverException: SAP DBTech JDBC: Cannot connect to jdbc:sapdb://labnw/J2E [Restart required].
at com.sap.dbtech.jdbc.DriverSapDB.connect(DriverSapDB.java:183)
at com.sap.sql.jdbc.NativeConnectionFactory.createNativeConnection(NativeConnectionFactory.java:219)
at com.sap.sql.connect.OpenSQLDataSourceImpl.createConnection(OpenSQLDataSourceImpl.java:500)
at com.sap.sql.connect.OpenSQLDataSourceImpl.getConnection(OpenSQLDataSourceImpl.java:254)
at com.sap.inst.jload.db.DBConnection.connectViaSecureStore(DBConnection.java:105)
at com.sap.inst.jload.db.DBConnection.connect(DBConnection.java:149)
at com.sap.inst.jload.Jload.main(Jload.java:580)
Thank you for your help in advance.
Regards
Sumit.

I changed the timezone for the installation to the German timezone and restarted the installation from the start, after uninstalling everything and dropping the database.
Everything went through.
Thank You.
Sumit.
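Before rerunning the load after such a "Cannot connect" error, a quick check that the database listener is actually reachable can rule out a network problem. A minimal sketch; the host name and port below are placeholders for your own values:

```python
# Minimal TCP reachability check for a database listener (illustrative).
import socket

def db_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 'labnw' and 7210 are placeholders; substitute your DB host and port.
print(db_reachable("labnw", 7210))
```

If this prints False, the problem is the database or the network rather than the JLoad tool itself.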