
SQL2000 - Replication

  • 29-06-2006 10:15am
    #1
    Moderators, Recreation & Hobbies Moderators Posts: 10,912 Mod ✭✭✭✭


    I have a server hosting a database of 40Gb of which 39Gb are stats. These stats are accessed once a month by a member of my team and as soon as the results pop up, the stats can be deleted. So each month we start off with a small database and it grows by about 1Gb per day.

    Because it takes 4 hours to restore a backup of this database on a different server, I thought I could use a replication system to send that 1Gb of data to a different server, either in real-time or as a nightly batch.

    What I'd like to be able to do, though, is delete the data at the source once it has been copied. That way my master server starts off each day with 1Gb of data. It'll take much less time to save the entire database elsewhere and much less time to restore it (I don't need to restore the stats, just the 1Gb of data that runs the solution).

    My problem is that with a replication system, if I delete the data on the master server isn't it going to delete it too on all the subscribers?

    In the end I want 1 server (master) that grows by 1Gb per day, sends the info elsewhere and then deletes that 1Gb from the master. Each day the master starts off at 1Gb while the other server grows by 1Gb.

    Should I just create a trigger on the tables concerned that updates a different table on a different server when an INSERT is detected and takes no action for a DELETE?
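    For what it's worth, the trigger I have in mind would look something like this. This is only a sketch: [StatsServer] is assumed to be set up as a linked server, and the table and column names (dbo.CallStats etc.) are made up for illustration.

    ```sql
    -- Sketch only: [StatsServer] is an assumed linked server; all table and
    -- column names are invented for illustration.
    CREATE TRIGGER trg_CallStats_CopyToRemote
    ON dbo.CallStats
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON
        -- Copy the newly inserted rows to the remote table. Deletes are
        -- ignored simply by not defining a DELETE trigger.
        INSERT INTO StatsServer.StatsDB.dbo.CallStats (CallID, CallTime, Duration)
        SELECT CallID, CallTime, Duration
        FROM inserted
    END
    ```

    One caveat I can already see: every INSERT on the master would then also wait on the remote server, so an outage there would block the logging agents.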


Comments

  • Moderators, Society & Culture Moderators Posts: 9,689 Mod ✭✭✭✭stevenmu


    Ponster wrote:
    My problem is that with a replication system, if I delete the data on the master server isn't it going to delete it too on all the subscribers?
    Yes it will, after replication the master server (publisher) and the subscribers should contain exactly the same data. There may be ways around that using filtering but I think that'd be more trouble than it's worth, unreliable and difficult to troubleshoot.
    In the end I want 1 server (master) that grows by 1Gb per day, sends the info elsewhere and then deletes that 1Gb from the master. Each day the master starts off at 1Gb while the other server grows by 1Gb.

    Should I just create a trigger on the tables concerned that updates a different table on a different server when an INSERT is detected and takes no action for a DELETE?
    That looks like a better solution to me. Another possibility might be to schedule a DTS job to run every night to copy over that day's stats to the other server and then delete them off the master server. It should be fairly easy to do it all as one complete transaction: either the rows all copy across and get deleted, or, if something goes wrong, none are copied or deleted. That'd be fairly easy to troubleshoot and recover from.
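    A rough T-SQL sketch of that copy-then-delete as a single transaction, using the @@ERROR checks you'd use on SQL 2000 (no TRY/CATCH until 2005). [StatsServer] and the table names are invented, and note that a transaction spanning a linked server is a distributed transaction, so MSDTC needs to be running on both machines.

    ```sql
    -- Sketch, not tested: copy today's stats across, then delete locally,
    -- all-or-nothing. Names are assumptions.
    BEGIN DISTRIBUTED TRANSACTION

    INSERT INTO StatsServer.StatsDB.dbo.CallStats (CallID, CallTime, Duration)
    SELECT CallID, CallTime, Duration
    FROM dbo.CallStats

    IF @@ERROR <> 0
    BEGIN
        -- Copy failed: undo everything and leave the source rows alone.
        ROLLBACK TRANSACTION
        RETURN
    END

    DELETE FROM dbo.CallStats

    IF @@ERROR <> 0
        ROLLBACK TRANSACTION
    ELSE
        COMMIT TRANSACTION
    ```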

    You could also write a fairly simple program to accomplish the same thing, or ideally change the stats-generating code to just write to the other server in the first place, although presumably you've thought of that and ruled it out for some reason.


  • Moderators, Recreation & Hobbies Moderators Posts: 10,912 Mod ✭✭✭✭Ponster


    Indeed. I have 7 servers, each with a program that generates a line in a log file for every telephone call it processes. An agent monitors the log file and updates the master database as the file is written to. So I'm not in a position to direct the data to multiple targets.

    But you're right, a DTS job does seem like the best idea (I had already started one when I realised you had replied :) ), but normally I use Enterprise Manager to perform DTS actions. How do I write a T-SQL script that tests whether the DTS has been carried out or not? I'd like to stop the truncate command from happening in case the data wasn't copied properly.

    I can probably use xp_cmdshell in a script to call the DTS job, and if it generates an error number that I can use in an 'if' statement then it should be OK.
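    Something along these lines, perhaps. The server and package names are just placeholders; dtsrun's /E flag means a trusted (Windows) connection, and xp_cmdshell should return 0 when the command succeeds.

    ```sql
    -- Sketch: run a saved local DTS package with dtsrun and only truncate
    -- if it reported success. MYSERVER and CopyStatsPackage are assumptions.
    DECLARE @rc int
    EXEC @rc = master.dbo.xp_cmdshell 'dtsrun /S MYSERVER /E /N "CopyStatsPackage"'

    IF @rc = 0
        TRUNCATE TABLE dbo.CallStats
    ELSE
        RAISERROR ('DTS copy failed - stats NOT truncated', 16, 1)
    ```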

    Sorry if this seems like I'm just talking out loud to myself, but it does help :)


  • Registered Users Posts: 15,443 ✭✭✭✭bonkey


    Ponster wrote:
    These stats are accessed once a month by a member of my team and as soon as the results pop up, the stats can be deleted.

    I'm not sure if I'm understanding this correctly, but you have multiple GB of stats, and as soon as someone does something with them, they can be deleted?

    Would it be possible to pre-prepare whatever it is that the person who will look at them wants, and then delete the "bulk" of the data?
    My problem is that with a replication system, if I delete the data on the master server isn't it going to delete it too on all the subscribers?
    I'm working from (old) memory, so don't take this as gospel, but yes, it is possible to avoid this. However, it's probably more trouble than it's worth.
    Should I just create a trigger on the tables concerned that updates a different table on a different server when an INSERT is detected and takes no action for a DELETE?
    I wouldn't recommend it. Your publisher's performance then becomes dependent on a number of things, including the subscription server(s) being available and the network performance to them.

    If you don't need the data in real-time, don't send the data in real-time.

    but normally I use Enterprise Manager to perform DTS actions. How do I write a T-SQL script that tests whether the DTS has been carried out or not? I'd like to stop the truncate command from happening in case the data wasn't copied properly.

    Why not use Enterprise Manager?

    Define a DTS job with a transfer step and a truncate step which is dependent on the transfer completing successfully. Transfer fails, no truncate. TBH, you'll possibly also want some other steps too, like backups (or else ensure separately that your backups occur before you do this nightly operation) and notifications.

    As an alternative, have a "backup" table on your publisher database. In the DTS, empty it, copy your day's data in there, then delete from your original table. From here, transfer to your destination server. If something goes wrong, you've got a 24-hour window where the data is still on the DB, but not in the original table.
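    In T-SQL those steps would look roughly like this (all names invented, and in a real DTS package each step would be its own task with an on-success precedence constraint to the next):

    ```sql
    -- Sketch of the 24-hour safety-net table; dbo.CallStats_Holding has the
    -- same schema as dbo.CallStats.
    TRUNCATE TABLE dbo.CallStats_Holding        -- 1: drop yesterday's safety copy

    INSERT INTO dbo.CallStats_Holding           -- 2: keep today's rows locally
    SELECT * FROM dbo.CallStats

    DELETE FROM dbo.CallStats                   -- 3: clear the live table

    -- 4 (separate DTS step): transfer dbo.CallStats_Holding to the
    --    destination server; the rows survive locally either way.
    ```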

    Incidentally, if your restores are taking 4 hours, it sounds like you're restoring entire tables/databases. If so, have you considered using partitioned tables to (partially) get around this?
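    On SQL 2000 the closest thing is a partitioned view: one physical table per month with a CHECK constraint on the date column, tied together with UNION ALL. A minimal sketch, with all names invented:

    ```sql
    -- One table per month; the CHECK constraint lets the optimiser skip
    -- tables that can't match a date-range query.
    CREATE TABLE dbo.CallStats_200606 (
        CallID   int      NOT NULL,
        CallTime datetime NOT NULL
            CHECK (CallTime >= '20060601' AND CallTime < '20060701'),
        Duration int      NOT NULL
    )
    GO
    CREATE VIEW dbo.AllCallStats AS
    SELECT CallID, CallTime, Duration FROM dbo.CallStats_200605
    UNION ALL
    SELECT CallID, CallTime, Duration FROM dbo.CallStats_200606
    ```

    Dropping a month's stats then becomes a TRUNCATE of one small table rather than a huge DELETE, and you only ever need to back up or restore the tables you care about.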

    jc

