Best practice timezone
What is the best practice for timezones on routers and switches? Should I set them all the same, say EST, since that is where all the management servers reside, or should they be set to where each device is located? Or should I be using GMT/UTC time?
Comments
I think it makes the most sense to set each network device to its local timezone. What I think is more important is that you use NTP to get consistent time across all of your devices.
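For instance, a minimal Cisco IOS sketch pairing NTP with an explicit timezone (the server address and zone names are placeholders, not from this thread):

```
ntp server 192.0.2.10
clock timezone EST -5
clock summer-time EDT recurring
service timestamps log datetime localtime show-timezone
```

Whichever zone you standardize on, it is the consistent NTP source that makes log correlation across devices possible.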
Similar Messages
-
Looking for customization best practices related to Timezone feature in EBS
Hi all,
We are updating our 11.5.10.2 env to deploy the Timezone feature, and I'm anticipating some issues with the customized code, as I'm not sure the developers followed best practices for handling dates.
Are there any documents where I can find the best practices to follow for customized coding in order to be compliant with the EBS Timezone feature?
Like, for instance, Date API usage or the work needed to convert date data for interfaces?
Thanks in advance
Chris

Are there any documents where I can find the best practices to follow for customized coding in order to be compliant with the EBS Timezone feature? Like, for instance, Date API usage or the work needed to convert date data for interfaces?

I do not think such a doc exists.
You may review these docs and see if it helps.
User Preferred Time Zone Support Guidelines for Applications 11i10CU2 [ID 330075.1]
User-Preferred Time Zone Support in Oracle E-Business Suite Release 12 [ID 402650.1]
Globalization Guide for Oracle Applications Release 12 [ID 393861.1]
Also, please log an SR to confirm the existence of the doc you are looking for.
Thanks,
Hussein -
Best practice for default values in EO
I have an entity called AUTH_USER (a user table). Within it are two TIMESTAMP WITH TIME ZONE columns like this:
EFF_DATE TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT current_timestamp,
TERM_DATE TIMESTAMP WITH TIME ZONE
Notice EFF_DATE has a default constraint and is not nullable.
In the EO, EFF_DATE is represented as a TIMESTAMPTZ and is checked as MANDATORY in its attribute properties. I cannot commit a NEW RECORD based on VO derived from this EO because of the MANDATORY constraint that is set in the EFF_DATE attribute's properties unless I enter a value. My original strategy was to have the field populated by a DEFAULT DATE if the user should attempt to leave this field null.
This is my dilemma:
1. I could have the database populate the value based on the default constraint in the table definition. Since EFF_DATE and TERM_DATE resemble the Effective Date (Start, End) properties that the framework already provides, I could set both fields as Effective Date (Start, End) and then check Refresh After Insert. But this still won't work unless I deselect the mandatory property on EFF_DATE.
2. The previous solution would work. However, I'm not sure that it is part of a "Best Practices" solution. In my mind if a database column is mandatory in the database then it should be mandatory in the Model as well.
3. If the first option is a poor choice, then what I need to do is to leave the attribute defined and mandatory and have a DEFAULT VALUE set in the RowImpl create method.
4. Finally, I could just force the user to enter a value. That would seem to be the common sense thing to do. I mean that's what calendar widgets and AJAX enabled JSF are for!
Regardless of what the correct answer is, I'd like to see some sample code showing how the date can be populated inside the RowImpl create method and passed to setEffDate(TimestampTZ dt). Keep in mind, though, that in this instance I need the timezone of the database server side and not the client side. I would also ask for advice on doing this with Groovy scripting or expressions.
And finally, what is the best practice in this situation?
Thanks in advance.

How about setting the default value property of the attribute in the EO to adf.currentDate?
(assuming you are using 11g).
This way there is a default date being set when the record is created and the user can change it if he wants to. -
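As a sketch of that suggestion (11g), the EO attribute's default can be a Groovy expression; adf.currentDate is the documented expression, though note it reflects middle-tier time, so if you truly need database-server time you may still have to fetch SYSTIMESTAMP yourself (attribute name below is from this thread, the rest is illustrative):

```
Attribute           : EffDate
Default Value Type  : Expression
Expression          : adf.currentDate
```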
Best practice for ASA Active/Standby failover
Hi,
I have configured a pair of Cisco ASA in Active/ Standby mode (see attached). What can be done to allow traffic to go from R1 to R2 via ASA2 when ASA1 inside or outside interface is down?
Currently this happens only when ASA1 is down (shutdown). Is there any recommended best practice for such network redundancy? Thanks in advance!

Hi Vibhor,
I tested pinging from R1 to R2, and the pings drop when I shut down either the inside (g1) or outside (g0) interface of the active ASA. Below are the ASA 'show failover' and 'show run' outputs:
ASSA1# conf t
ASSA1(config)# int g1
ASSA1(config-if)# shut
ASSA1(config-if)# show failover
Failover On
Failover unit Primary
Failover LAN Interface: FAILOVER GigabitEthernet2 (up)
Unit Poll frequency 1 seconds, holdtime 15 seconds
Interface Poll frequency 5 seconds, holdtime 25 seconds
Interface Policy 1
Monitored Interfaces 3 of 60 maximum
Version: Ours 8.4(2), Mate 8.4(2)
Last Failover at: 14:20:00 SGT Nov 18 2014
This host: Primary - Active
Active time: 7862 (sec)
Interface outside (100.100.100.1): Normal (Monitored)
Interface inside (192.168.1.1): Link Down (Monitored)
Interface mgmt (10.101.50.100): Normal (Waiting)
Other host: Secondary - Standby Ready
Active time: 0 (sec)
Interface outside (100.100.100.2): Normal (Monitored)
Interface inside (192.168.1.2): Link Down (Monitored)
Interface mgmt (0.0.0.0): Normal (Waiting)
Stateful Failover Logical Update Statistics
Link : FAILOVER GigabitEthernet2 (up)
Stateful Obj xmit xerr rcv rerr
General 1053 0 1045 0
sys cmd 1045 0 1045 0
up time 0 0 0 0
RPC services 0 0 0 0
TCP conn 0 0 0 0
UDP conn 0 0 0 0
ARP tbl 2 0 0 0
Xlate_Timeout 0 0 0 0
IPv6 ND tbl 0 0 0 0
VPN IKEv1 SA 0 0 0 0
VPN IKEv1 P2 0 0 0 0
VPN IKEv2 SA 0 0 0 0
VPN IKEv2 P2 0 0 0 0
VPN CTCP upd 0 0 0 0
VPN SDI upd 0 0 0 0
VPN DHCP upd 0 0 0 0
SIP Session 0 0 0 0
Route Session 5 0 0 0
User-Identity 1 0 0 0
Logical Update Queue Information
Cur Max Total
Recv Q: 0 9 1045
Xmit Q: 0 30 10226
ASSA1(config-if)#
ASSA1# sh run
: Saved
ASA Version 8.4(2)
hostname ASSA1
enable password 2KFQnbNIdI.2KYOU encrypted
passwd 2KFQnbNIdI.2KYOU encrypted
names
interface GigabitEthernet0
nameif outside
security-level 0
ip address 100.100.100.1 255.255.255.0 standby 100.100.100.2
ospf message-digest-key 20 md5 *****
ospf authentication message-digest
interface GigabitEthernet1
nameif inside
security-level 100
ip address 192.168.1.1 255.255.255.0 standby 192.168.1.2
ospf message-digest-key 20 md5 *****
ospf authentication message-digest
interface GigabitEthernet2
description LAN/STATE Failover Interface
interface GigabitEthernet3
shutdown
no nameif
no security-level
no ip address
interface GigabitEthernet4
nameif mgmt
security-level 0
ip address 10.101.50.100 255.255.255.0
interface GigabitEthernet5
shutdown
no nameif
no security-level
no ip address
ftp mode passive
clock timezone SGT 8
access-list OUTSIDE_ACCESS_IN extended permit icmp any any
pager lines 24
logging timestamp
logging console debugging
logging monitor debugging
mtu outside 1500
mtu inside 1500
mtu mgmt 1500
failover
failover lan unit primary
failover lan interface FAILOVER GigabitEthernet2
failover link FAILOVER GigabitEthernet2
failover interface ip FAILOVER 192.168.99.1 255.255.255.0 standby 192.168.99.2
icmp unreachable rate-limit 1 burst-size 1
asdm image disk0:/asdm-715-100.bin
no asdm history enable
arp timeout 14400
access-group OUTSIDE_ACCESS_IN in interface outside
router ospf 10
network 100.100.100.0 255.255.255.0 area 1
network 192.168.1.0 255.255.255.0 area 0
area 0 authentication message-digest
area 1 authentication message-digest
log-adj-changes
default-information originate always
route outside 0.0.0.0 0.0.0.0 100.100.100.254 1
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
timeout tcp-proxy-reassembly 0:01:00
timeout floating-conn 0:00:00
dynamic-access-policy-record DfltAccessPolicy
user-identity default-domain LOCAL
aaa authentication ssh console LOCAL
http server enable
http 10.101.50.0 255.255.255.0 mgmt
no snmp-server location
no snmp-server contact
snmp-server enable traps snmp authentication linkup linkdown coldstart warmstart
telnet timeout 5
ssh 10.101.50.0 255.255.255.0 mgmt
ssh timeout 5
console timeout 0
tls-proxy maximum-session 10000
threat-detection basic-threat
threat-detection statistics access-list
no threat-detection statistics tcp-intercept
webvpn
username cisco password 3USUcOPFUiMCO4Jk encrypted
prompt hostname context
no call-home reporting anonymous
call-home
profile CiscoTAC-1
no active
destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService
destination address email [email protected]
destination transport-method http
subscribe-to-alert-group diagnostic
subscribe-to-alert-group environment
subscribe-to-alert-group inventory periodic monthly
subscribe-to-alert-group configuration periodic monthly
subscribe-to-alert-group telemetry periodic daily
crashinfo save disable
Cryptochecksum:fafd8a885033aeac12a2f682260f57e9
: end
ASSA1# -
Best practice to store date in database
which best practices do you know to do this?
Snoopybad wrote:
which best practices do you know to do this?

1. Understand the business requirements that drive the need to store such a value.
2. Understand the difference between 'time', 'date' and 'timestamp' (date and time) and also understand that a time interval (measurement of passing time) is not the same as any of those.
3. Understand what timezone means exactly
4. Look at the database itself to understand exactly how it stores such values. This includes ensuring that you understand exactly how the timezone is handled. -
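A minimal Python sketch of points 2 and 3 (the zone names and sample instant are illustrative): store one unambiguous UTC instant and convert only for display.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Points 2/3: store one unambiguous instant (UTC), not local wall-clock text.
stored = datetime(2014, 11, 18, 6, 20, tzinfo=timezone.utc)

# Convert only at display time, per the user's or site's timezone.
singapore = stored.astimezone(ZoneInfo("Asia/Singapore"))
new_york = stored.astimezone(ZoneInfo("America/New_York"))

# Same instant, different wall-clock representations.
assert singapore.hour == 14                               # UTC+8
assert new_york.utcoffset().total_seconds() == -5 * 3600  # EST in November
assert singapore == new_york                              # still equal as instants
```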
Best practices for a multi-language application
Hi,
I'm planning to develop an application to work in two different countries, and I'm hoping to get some feedback from this community on the best practices to follow when building the application. The application will run in two different languages (English and French) and in two different timezones.
My doubts:
- Which type format is most appropriate for my table date fields?
- I will build the application on english language. Since APEX has the french language for the admin frontend, how can I install it and can I reuse the translation to my applications? The interactive reports region are somewhere translated into french, how can I access that translation and use it in my application?
Thank you

Hello Cao,
>> The application will run in two different languages (English and French) and in two different timezones ... It would be very helpful if I could access at least the translation of the IRR regions.
As you mentioned, French is one of the native supported languages by the Application Builder. As such, all the internal APEX engine messages (including those for IR) were translated to French. In order to enjoy it you need to upload the French language into your Application Builder. The following shows you how to do that:
http://download.oracle.com/docs/cd/E23903_01/doc/doc.41/e21673/otn_install.htm#BEHJICEB
In your case, the relevant directory is …/apex/builder/fr/. Please pay attention to the need to set the NLS_LANG parameter properly.
>> Which type format is most appropriate for my table date fields?
I'm not sure exactly what you mean by that. Date fields are saved in the database format-free, and it's up to you to determine how to display them, usually by using the to_char() function.
As you mentioned that you are going to work with two different time zones, it's possible that the date formats for these two zones are different. In this case, you can use the APEX Globalization Application Date Time Format item. As the help for this item shows, you can use a substitution string as the item value, and you can set the value of the substitution string according to the current language and its corresponding date format.
You should also set the Automatic Time Zone field to yes. It will make your life a bit easier.
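The idea of one stored value with a per-language display format can be sketched in Python (the format masks and zones are illustrative, not from this thread):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

stored = datetime(2014, 3, 1, 17, 30, tzinfo=timezone.utc)

# Per-language format masks, in the spirit of an APEX substitution string.
masks = {"en": "%m/%d/%Y %H:%M", "fr": "%d/%m/%Y %H:%M"}
zones = {"en": ZoneInfo("America/Montreal"), "fr": ZoneInfo("Europe/Paris")}

def display(dt, lang):
    # One stored instant, rendered per the user's language and zone.
    return dt.astimezone(zones[lang]).strftime(masks[lang])

en_view = display(stored, "en")  # "03/01/2014 12:30"
fr_view = display(stored, "fr")  # "01/03/2014 18:30"
```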
Regards,
Arie.
♦ Please remember to mark appropriate posts as correct/helpful. For the long run, it will benefit us all.
♦ Author of Oracle Application Express 3.2 – The Essentials and More -
Logical level in Fact tables - best practice
Hi all,
I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. Anyway, what I often struggle with is the Logical Levels (in the Content tab), where the level of each dimension is to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates, getting them to work/appear with different dimensions. (Using the menu "More" - "Get levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
For instance, I have about 10-12 different dimensions; should all of them always be connected to each fact table, either on the Detail or Total level? I can see the use of the logical levels when using aggregate fact tables (on quarter, month, etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
Does anyone have a best practice concerning this issue? I have googled for this but haven't found anything good yet. Any ideas/articles are highly appreciated.

Hi User,
For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level.

It is not necessary to connect all dimensions; it depends on the report you are creating. As a best practice, though, you should maintain them all at the Detail level when you specify join conditions in the physical layer.
For example, for the sales table, if you want to report at the ProductDimension.Productname level, then you should use the Detail level; otherwise use the Total level (at the Product, Employee level).
Get Levels. (Available only for fact tables) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the administration tool will not include the aggregation content of this dimension.
Source: Admin Guide (Get Levels definition)
thanks,
Saichand.v -
Best practices for setting up users on a small office network?
Hello,
I am setting up a small office and am wondering what the best practices/steps are to setup/manage the admin, user logins and sharing privileges for the below setup:
Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
All machines are to be able to connect to the network, peripherals and external hard drive. Also, I would like to setup drop boxes as well to easily share files between the computers (I was thinking of using the external harddrive for this).
Thank you,

Hi,
Thanks for your posting.
When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
For more and detail information, please refer to:
Best Practices for Adding Domain Controllers in Remote Sites
http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
Regards.
Vivian Wang -
Add fields in transformations in BI 7 (best practice)?
Hi Experts,
I have a question regarding transformation of data in BI 7.0.
Task:
Add new fields in a second level DSO, based on some manipulation of first level DSO data. In 3.5 we would have used a start routine to manipulate and append the new fields to the structure.
Possible solutions:
1) Add the new fields to first level DSO as well (empty)
- Pro: Simple, easy to understand
- Con: Disc space consuming, performance degrading when writing to first level DSO
2) Use routines in the field mapping
- Pro: Simple
- Con: Hard to performance optimize (we could of course fill an internal table in the start routine and then read from this to get some performance optimization, but the solution would be more complex).
3) Update the fields in the End routine
- Pro: Simple, easy to understand, can be performance optimized
- Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine).
Does anybody know what the best practice is? Or do you have any experience regarding what you see as the best solution?
Thank you in advance,
Mikael

Hi Mikael.
I like the 3rd option and have used this many many times. In answer to your question:-
Update the fields in the End routine
- Pro: Simple, easy to understand, can be performance optimized - Yes, I have read and tested this, and it works faster. An OSS consulting note is out there indicating the speed of the end routine.
- Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine). - Yes but by using the result package, the manipulation can be done easily.
Hope it helps.
Thanks,
Pom -
Hello,
I have a customer who uses temp tables all over their application.
This customer is a novice, and the app has its roots in VB6. We are converting it to .NET.
I would really like to know the best practice for using temp tables.
I have seen code like this in the app.
CR2.Database.Tables.Item(1).Location = "tempdb.dbo.[##Scott_xwPaySheetDtlForN]"
That seems to work, though I do not know why the full tempdb.dbo.[## prefix is required.
However, when I use this in the new report I am doing, I get runtime errors.
I also tried this:
CR2.Database.Tables.Item(1).Location = "##Scott_xwPaySheetDtlForN"
I did not get errors, but I was returned data I did not expect.
Before I delve into different ways to do this, I could use some help with a good pattern to use.
Thanks

Hi Scott,
Are you using the RDC still? It's not clear but looks like it.
We had an API that could piggyback the HDBC handle in the RDC (craxdrt.dll), but that API is no longer available in .NET. Also, the RDC is not supported in .NET, since .NET uses the framework and the RDC is COM.
The workaround is to copy the temp data into a data set and then set the location to the data set. There is no way that I know of to get to tempdb from .NET. The reason is that there is no CR API to set the owner of the table to the user; MS SQL Server locks tempdb so that the user has exclusive rights on it.
Thank you
Don -
Best Practice for Significant Amounts of Data
This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe. These transactions fall into four categories, so my aggregation is as follows:
Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ). I would like each series on this chart to represent a Base.
My problem is that 6000 rows still appears to be too many rows for an Xcelsius dashboard to handle. I have followed the Concatenated Key approach and used SUMIF to populate a matrix with the data for use in the Chart. This matrix would have Bases for row headings (only those within the selected country) and the Column Headings would be Month. The data would be COUNT. (I also need the same matrix with Dollar Amounts as the data).
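The concatenated-key SUMIF technique described above can be sketched outside Excel like this (field names and values are made up for illustration):

```python
# Pre-aggregated rows: (country, category, base, month) -> transaction count
rows = [
    ("DE", "Payroll", "Base A", "2014-01", 120),
    ("DE", "Payroll", "Base B", "2014-01", 80),
    ("DE", "Travel",  "Base A", "2014-01", 33),
    ("JP", "Payroll", "Base A", "2014-01", 55),
]

def sumif(rows, country, category, base, month):
    # Excel analogue: SUMIF(keys, country&"|"&category&"|"&base&"|"&month, counts)
    key = "|".join((country, category, base, month))
    return sum(n for c, cat, b, m, n in rows
               if "|".join((c, cat, b, m)) == key)

matrix_cell = sumif(rows, "DE", "Payroll", "Base A", "2014-01")  # 120
```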
In Excel this matrix works fine and seems to be very fast. The problem is with Xcelsius. I have imported the spreadsheet but have NOT even created the chart yet, and Xcelsius is CHOKING (and crashing). I changed Max Rows to 7000 to accommodate the data. I placed a simple combo box and a grid on the canvas, BUT NO CHART yet, and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the combo box.
So, I guess this brings up a few questions:
1) Am I doing something wrong and did I miss something that would prevent this problem?
2) If this is standard Xcelsius behavior, what are the Best Practices to solve the problem?
a. Do I have to create 50 different Data Ranges in order to improve performance (i.e. Each Country-Category would have a separate range)?
b. Would it even work if it had that many data ranges in it?
c. Do you aggregate it as a crosstab (Months as Column headings) and insert that crosstabbed data into Excel.
d. Other ideas that I'm missing?
FYI: These dashboards will be exported to PDF and distributed. They will not be connected to a server or data source.
Any thoughts or guidance would be appreciated.
Thanks,
David

Hi David,
I would leave your query
"Am I doing something wrong and did I miss something that would prevent this problem?"
to the experts/ gurus out here on this forum.
From my end, you can follow
TOP 10 EXCEL TIPS FOR SUCCESS
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
Please follow the Xcelsius Best Practices at
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
In order to reduce the size of xlf and swf files follow
http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
Hope this helps to certain extent.
Regards
Nikhil -
Best-practice for Catalog Views ? :|
Hello community,
A best practice question:
The situation: I have several product categories (110), several items in those categories (4000) and 300 end-users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is how I can implement this.
My first idea is:
1. Create 110 Procurement Catalogs (1 for every prod.category). Each catalog should contain only its product category.
2. Assign in my Org Model, in a user-level all the "catalogs" that the user should access.
Do you have any ideas for improving this?
Saludos desde Mexico,
Diego

Hi,
Your way of doing it will work, but you'll get maintenance issues (too many catalogs, and a catalog link to maintain for each user).
The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you'll have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
My advice:
- Challenge your customer about views, and try to limit the number of views, with for example strategic and non-strategic
- With CCM 1.0, stick to the procurement catalogs, or implement BAdIs to assign items to the views (I experienced it; it works, but it is quite difficult), but with a limited number of views
Good luck.
Vadim -
Best practice on sqlite for games?
Hi Everyone, I'm new to building games/apps, so I apologize if this question is redundant...
I am developing a couple games for Android/iOS, and was initially using a regular (un-encrypted) sqlite database. I need to populate the database with a lot of info for the games, such as levels, store items, etc. Originally, I was creating the database with SQL Manager (Firefox) and then when I install a game on a device, it would copy that pre-populated database to the device. However, if someone was able to access that app's database, they could feasibly add unlimited coins to their account, unlock every level, etc.
So I have a few questions:
First, can someone access that data in an APK/IPA app once downloaded from the app store, or is the method I've been using above secure and good practice?
Second, is the best solution to go with an encrypted database? I know Adobe Air has the built-in support for that, and I have the perfect article on how to create it (Ten tips for building better Adobe AIR applications | Adobe Developer Connection) but I would like the expert community opinion on this.
Now, if the answer is to go with encrypted, that's great - but, in doing so, is it possible to still use the copy function at the beginning or do I need to include all of the script to create the database tables and then populate them with everything? That will be quite a bit of script to handle the initial setup, and if the user was to abandon the app halfway through that population, it might mess things up.
Any thoughts / best practice / recommendations are very appreciated. Thank you!

I'll just post my own reply to this.
What I ended up doing, was creating the script that self-creates the database and then populates the tables (as unencrypted... the encryption portion is commented out until store publishing). It's a tremendous amount of code, completely repetitive with the exception of the values I'm entering, but you can't do an insert loop or multi-line insert statement in AIR's SQLite so the best move is to create everything line by line.
This creates the database, and since it's not encrypted, it can be tested using Firefox's SQLite Manager or some other database program. Once you're ready for deployment to the app stores, you simply modify the script above to use encryption instead of the unencrypted method used for testing.
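A Python sqlite3 analogue of that self-creating pattern (the table and seed values are illustrative; AIR's ActionScript API differs, but the shape is the same):

```python
import sqlite3

def init_db(conn):
    # Create the schema and seed rows on first run, line by line,
    # mirroring the self-creating script described above.
    conn.execute("CREATE TABLE IF NOT EXISTS levels "
                 "(id INTEGER PRIMARY KEY, name TEXT, coins INTEGER)")
    if conn.execute("SELECT COUNT(*) FROM levels").fetchone()[0] == 0:
        conn.execute("INSERT INTO levels VALUES (1, 'Tutorial', 0)")
        conn.execute("INSERT INTO levels VALUES (2, 'Forest', 50)")
    conn.commit()

conn = sqlite3.connect(":memory:")  # a real game would use a file path
init_db(conn)
init_db(conn)  # idempotent: a second run does not duplicate the seed rows
count = conn.execute("SELECT COUNT(*) FROM levels").fetchone()[0]
```

Guarding the seed inserts behind the row-count check is what lets the same script run safely at every launch, including after an interrupted first run.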
So far this has worked best for me. If anyone needs some example code, let me know and I can post it. -
We have an homegrown Access database originally designed in 2000 that now has an SQL back-end. The database has not yet been converted to a higher format such as Access 2007 since at least 2 users are still on Access 2003. It is fine if suggestions
will only work with Access 2007 or higher.
I'm trying to determine if our database is the best place to do this or if we should look at another solution. We have thousands of products each with a single identifier. There are customers who provide us regular sales reporting for what was
sold in a given time period -- weekly, monthly, quarterly, yearly time periods being most important. This reporting may or may not include all of our product identifiers. The reporting is typically based on calendar-defined timing although we have
some customers who have their own calendars which may not align to a calendar month or calendar year so recording the time period can be helpful.
Each customer's sales report can contain anything from 1,000-20,000 rows of products for each report. Each customer report is different and they typically have between 4-30 columns of data for each product; headers are consistently named. The
product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week. Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand,
returns, on hand information for each location or customer grouping, sell-through units information for each location or customer grouping for that given time period, sell-through dollars information for each location or customer grouping for that given time
period, sell-through units information for each location or customer grouping for a cumulative time period (same thing for dollars), warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's
current status code for that product, and so on.
Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to
set-up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or
categories across specific weeks/months/years, etc. We do have a separate product table so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries. We do need to maintain
the sales reporting information indefinitely.
I welcome any suggestions, best practice or resources (books, web, etc).
Many thanks!

Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables .....
I assume you want to migrate to SQL Server.
Your best course of action is to hire a professional database designer for a short period like a month.
Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
Finally you have to hire an SSRS professional to design reports for your company.
It is also beneficial if the above professionals train your staff while building the new RDBMS.
Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
Kalman Toth Database & OLAP Architect
SELECT Video Tutorials 4 Hours
New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012 -
Best Practice to fetch SQL Server data and Insert into Oracle Tables
Hello,
I want to read SQL Server data every half an hour and write it into Oracle tables (in two different databases). What is the best practice for doing this?
We do not have any database dblinks from oracle to sqlserver and vice versa.
Any help is highly appreciated.
Thanks

Well, that's easy:
use a TimerTask to do the following every half an hour:
- open a connection to sql server
- open two connections to the oracle databases
- for each row you read from the sql server, do the inserts into the oracle databases
- commit
- close all connections
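The steps above can be sketched in Python, with sqlite3 standing in for both SQL Server and Oracle (table name, columns, and paths are made up; a Java TimerTask/JDBC version follows the same shape):

```python
import os
import sqlite3
import tempfile

def transfer_once(source_path, dest_paths):
    # The steps above: open source, open destinations, copy rows, commit, close.
    src = sqlite3.connect(source_path)
    dests = [sqlite3.connect(p) for p in dest_paths]
    try:
        rows = src.execute("SELECT id, payload FROM staging").fetchall()
        for d in dests:
            d.execute("CREATE TABLE IF NOT EXISTS staging "
                      "(id INTEGER PRIMARY KEY, payload TEXT)")
            d.executemany("INSERT OR REPLACE INTO staging VALUES (?, ?)", rows)
            d.commit()
        return len(rows)
    finally:
        src.close()
        for d in dests:
            d.close()

# Demo with throwaway file databases; a scheduler (Java TimerTask, cron,
# etc.) would call transfer_once against the real servers every 30 minutes.
tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "src.db")
dst_path = os.path.join(tmp, "dst.db")
with sqlite3.connect(src_path) as c:
    c.execute("CREATE TABLE staging (id INTEGER PRIMARY KEY, payload TEXT)")
    c.execute("INSERT INTO staging VALUES (1, 'row one')")
copied = transfer_once(src_path, [dst_path])
```

Opening and closing the connections inside each run keeps the job stateless, so a missed or failed interval does not leave half-open handles behind.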